Test Report: KVM_Linux_crio 20534

ca4340fb5ae0bb74f259779cd383137dc2ab446a:2025-04-14:39132

Test fail (10/321)

TestAddons/parallel/Ingress (151.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-345184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-345184 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-345184 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e12914ca-6f74-4966-9007-ca362d38742e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e12914ca-6f74-4966-9007-ca362d38742e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003983036s
I0414 10:54:34.562537  510444 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-345184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.773432134s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-345184 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.54
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-345184 -n addons-345184
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 logs -n 25: (1.212575897s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-702527                                                                     | download-only-702527 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC | 14 Apr 25 10:51 UTC |
	| delete  | -p download-only-858677                                                                     | download-only-858677 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC | 14 Apr 25 10:51 UTC |
	| delete  | -p download-only-702527                                                                     | download-only-702527 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC | 14 Apr 25 10:51 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-251180 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC |                     |
	|         | binary-mirror-251180                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38675                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-251180                                                                     | binary-mirror-251180 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC | 14 Apr 25 10:51 UTC |
	| addons  | disable dashboard -p                                                                        | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC |                     |
	|         | addons-345184                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC |                     |
	|         | addons-345184                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-345184 --wait=true                                                                | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC | 14 Apr 25 10:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-345184 addons disable                                                                | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:53 UTC | 14 Apr 25 10:53 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-345184 addons disable                                                                | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:53 UTC | 14 Apr 25 10:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:53 UTC | 14 Apr 25 10:54 UTC |
	|         | -p addons-345184                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-345184 addons                                                                        | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-345184 addons disable                                                                | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-345184 ip                                                                            | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	| addons  | addons-345184 addons disable                                                                | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-345184 addons                                                                        | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-345184 ssh cat                                                                       | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | /opt/local-path-provisioner/pvc-120ee6d6-8650-470a-9b85-c8e61a164c70_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-345184 addons disable                                                                | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:55 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-345184 ssh curl -s                                                                   | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-345184 addons disable                                                                | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-345184 addons                                                                        | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-345184 addons                                                                        | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-345184 addons                                                                        | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-345184 addons                                                                        | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:54 UTC | 14 Apr 25 10:54 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-345184 ip                                                                            | addons-345184        | jenkins | v1.35.0 | 14 Apr 25 10:56 UTC | 14 Apr 25 10:56 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 10:51:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 10:51:26.111526  511069 out.go:345] Setting OutFile to fd 1 ...
	I0414 10:51:26.111626  511069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 10:51:26.111636  511069 out.go:358] Setting ErrFile to fd 2...
	I0414 10:51:26.111643  511069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 10:51:26.111817  511069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 10:51:26.112424  511069 out.go:352] Setting JSON to false
	I0414 10:51:26.113282  511069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16437,"bootTime":1744611449,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 10:51:26.113343  511069 start.go:139] virtualization: kvm guest
	I0414 10:51:26.115341  511069 out.go:177] * [addons-345184] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 10:51:26.116921  511069 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 10:51:26.116915  511069 notify.go:220] Checking for updates...
	I0414 10:51:26.118206  511069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 10:51:26.119302  511069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 10:51:26.120420  511069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 10:51:26.121517  511069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 10:51:26.122790  511069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 10:51:26.124149  511069 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 10:51:26.156889  511069 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 10:51:26.158021  511069 start.go:297] selected driver: kvm2
	I0414 10:51:26.158034  511069 start.go:901] validating driver "kvm2" against <nil>
	I0414 10:51:26.158055  511069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 10:51:26.158726  511069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 10:51:26.158820  511069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 10:51:26.175123  511069 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 10:51:26.175178  511069 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 10:51:26.175476  511069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 10:51:26.175526  511069 cni.go:84] Creating CNI manager for ""
	I0414 10:51:26.175592  511069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 10:51:26.175605  511069 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 10:51:26.175676  511069 start.go:340] cluster config:
	{Name:addons-345184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-345184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 10:51:26.175816  511069 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 10:51:26.178409  511069 out.go:177] * Starting "addons-345184" primary control-plane node in "addons-345184" cluster
	I0414 10:51:26.179559  511069 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 10:51:26.179609  511069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 10:51:26.179621  511069 cache.go:56] Caching tarball of preloaded images
	I0414 10:51:26.179722  511069 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 10:51:26.179735  511069 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 10:51:26.180068  511069 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/config.json ...
	I0414 10:51:26.180098  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/config.json: {Name:mk48ab279bfdcbd59a5407912ea1fe603a69d732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:26.180267  511069 start.go:360] acquireMachinesLock for addons-345184: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 10:51:26.180332  511069 start.go:364] duration metric: took 48.447µs to acquireMachinesLock for "addons-345184"
	I0414 10:51:26.180355  511069 start.go:93] Provisioning new machine with config: &{Name:addons-345184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-345184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 10:51:26.180419  511069 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 10:51:26.182898  511069 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0414 10:51:26.183045  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:51:26.183104  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:51:26.198964  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0414 10:51:26.199427  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:51:26.200048  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:51:26.200077  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:51:26.200458  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:51:26.200666  511069 main.go:141] libmachine: (addons-345184) Calling .GetMachineName
	I0414 10:51:26.200807  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:26.201038  511069 start.go:159] libmachine.API.Create for "addons-345184" (driver="kvm2")
	I0414 10:51:26.201073  511069 client.go:168] LocalClient.Create starting
	I0414 10:51:26.201119  511069 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem
	I0414 10:51:26.481832  511069 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem
	I0414 10:51:26.921167  511069 main.go:141] libmachine: Running pre-create checks...
	I0414 10:51:26.921193  511069 main.go:141] libmachine: (addons-345184) Calling .PreCreateCheck
	I0414 10:51:26.921842  511069 main.go:141] libmachine: (addons-345184) Calling .GetConfigRaw
	I0414 10:51:26.922432  511069 main.go:141] libmachine: Creating machine...
	I0414 10:51:26.922454  511069 main.go:141] libmachine: (addons-345184) Calling .Create
	I0414 10:51:26.922641  511069 main.go:141] libmachine: (addons-345184) creating KVM machine...
	I0414 10:51:26.922661  511069 main.go:141] libmachine: (addons-345184) creating network...
	I0414 10:51:26.924033  511069 main.go:141] libmachine: (addons-345184) DBG | found existing default KVM network
	I0414 10:51:26.924731  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:26.924565  511092 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123560}
	I0414 10:51:26.924782  511069 main.go:141] libmachine: (addons-345184) DBG | created network xml: 
	I0414 10:51:26.924826  511069 main.go:141] libmachine: (addons-345184) DBG | <network>
	I0414 10:51:26.924838  511069 main.go:141] libmachine: (addons-345184) DBG |   <name>mk-addons-345184</name>
	I0414 10:51:26.924843  511069 main.go:141] libmachine: (addons-345184) DBG |   <dns enable='no'/>
	I0414 10:51:26.924848  511069 main.go:141] libmachine: (addons-345184) DBG |   
	I0414 10:51:26.924854  511069 main.go:141] libmachine: (addons-345184) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 10:51:26.924867  511069 main.go:141] libmachine: (addons-345184) DBG |     <dhcp>
	I0414 10:51:26.924873  511069 main.go:141] libmachine: (addons-345184) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 10:51:26.924879  511069 main.go:141] libmachine: (addons-345184) DBG |     </dhcp>
	I0414 10:51:26.924883  511069 main.go:141] libmachine: (addons-345184) DBG |   </ip>
	I0414 10:51:26.924888  511069 main.go:141] libmachine: (addons-345184) DBG |   
	I0414 10:51:26.924897  511069 main.go:141] libmachine: (addons-345184) DBG | </network>
	I0414 10:51:26.924907  511069 main.go:141] libmachine: (addons-345184) DBG | 
	I0414 10:51:26.930096  511069 main.go:141] libmachine: (addons-345184) DBG | trying to create private KVM network mk-addons-345184 192.168.39.0/24...
	I0414 10:51:26.999799  511069 main.go:141] libmachine: (addons-345184) DBG | private KVM network mk-addons-345184 192.168.39.0/24 created
	I0414 10:51:26.999846  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:26.999769  511092 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 10:51:26.999861  511069 main.go:141] libmachine: (addons-345184) setting up store path in /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184 ...
	I0414 10:51:26.999872  511069 main.go:141] libmachine: (addons-345184) building disk image from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 10:51:26.999959  511069 main.go:141] libmachine: (addons-345184) Downloading /home/jenkins/minikube-integration/20534-503273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 10:51:27.290129  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:27.289952  511092 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa...
	I0414 10:51:27.472026  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:27.471857  511092 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/addons-345184.rawdisk...
	I0414 10:51:27.472101  511069 main.go:141] libmachine: (addons-345184) DBG | Writing magic tar header
	I0414 10:51:27.472116  511069 main.go:141] libmachine: (addons-345184) DBG | Writing SSH key tar header
	I0414 10:51:27.472124  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:27.471979  511092 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184 ...
	I0414 10:51:27.472136  511069 main.go:141] libmachine: (addons-345184) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184
	I0414 10:51:27.472143  511069 main.go:141] libmachine: (addons-345184) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines
	I0414 10:51:27.472151  511069 main.go:141] libmachine: (addons-345184) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184 (perms=drwx------)
	I0414 10:51:27.472162  511069 main.go:141] libmachine: (addons-345184) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines (perms=drwxr-xr-x)
	I0414 10:51:27.472179  511069 main.go:141] libmachine: (addons-345184) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 10:51:27.472189  511069 main.go:141] libmachine: (addons-345184) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube (perms=drwxr-xr-x)
	I0414 10:51:27.472202  511069 main.go:141] libmachine: (addons-345184) setting executable bit set on /home/jenkins/minikube-integration/20534-503273 (perms=drwxrwxr-x)
	I0414 10:51:27.472209  511069 main.go:141] libmachine: (addons-345184) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 10:51:27.472217  511069 main.go:141] libmachine: (addons-345184) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 10:51:27.472224  511069 main.go:141] libmachine: (addons-345184) creating domain...
	I0414 10:51:27.472233  511069 main.go:141] libmachine: (addons-345184) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273
	I0414 10:51:27.472240  511069 main.go:141] libmachine: (addons-345184) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 10:51:27.472273  511069 main.go:141] libmachine: (addons-345184) DBG | checking permissions on dir: /home/jenkins
	I0414 10:51:27.472297  511069 main.go:141] libmachine: (addons-345184) DBG | checking permissions on dir: /home
	I0414 10:51:27.472306  511069 main.go:141] libmachine: (addons-345184) DBG | skipping /home - not owner
	I0414 10:51:27.473563  511069 main.go:141] libmachine: (addons-345184) define libvirt domain using xml: 
	I0414 10:51:27.473584  511069 main.go:141] libmachine: (addons-345184) <domain type='kvm'>
	I0414 10:51:27.473594  511069 main.go:141] libmachine: (addons-345184)   <name>addons-345184</name>
	I0414 10:51:27.473601  511069 main.go:141] libmachine: (addons-345184)   <memory unit='MiB'>4000</memory>
	I0414 10:51:27.473624  511069 main.go:141] libmachine: (addons-345184)   <vcpu>2</vcpu>
	I0414 10:51:27.473628  511069 main.go:141] libmachine: (addons-345184)   <features>
	I0414 10:51:27.473633  511069 main.go:141] libmachine: (addons-345184)     <acpi/>
	I0414 10:51:27.473638  511069 main.go:141] libmachine: (addons-345184)     <apic/>
	I0414 10:51:27.473642  511069 main.go:141] libmachine: (addons-345184)     <pae/>
	I0414 10:51:27.473646  511069 main.go:141] libmachine: (addons-345184)     
	I0414 10:51:27.473650  511069 main.go:141] libmachine: (addons-345184)   </features>
	I0414 10:51:27.473656  511069 main.go:141] libmachine: (addons-345184)   <cpu mode='host-passthrough'>
	I0414 10:51:27.473681  511069 main.go:141] libmachine: (addons-345184)   
	I0414 10:51:27.473698  511069 main.go:141] libmachine: (addons-345184)   </cpu>
	I0414 10:51:27.473718  511069 main.go:141] libmachine: (addons-345184)   <os>
	I0414 10:51:27.473735  511069 main.go:141] libmachine: (addons-345184)     <type>hvm</type>
	I0414 10:51:27.473741  511069 main.go:141] libmachine: (addons-345184)     <boot dev='cdrom'/>
	I0414 10:51:27.473746  511069 main.go:141] libmachine: (addons-345184)     <boot dev='hd'/>
	I0414 10:51:27.473752  511069 main.go:141] libmachine: (addons-345184)     <bootmenu enable='no'/>
	I0414 10:51:27.473756  511069 main.go:141] libmachine: (addons-345184)   </os>
	I0414 10:51:27.473760  511069 main.go:141] libmachine: (addons-345184)   <devices>
	I0414 10:51:27.473767  511069 main.go:141] libmachine: (addons-345184)     <disk type='file' device='cdrom'>
	I0414 10:51:27.473781  511069 main.go:141] libmachine: (addons-345184)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/boot2docker.iso'/>
	I0414 10:51:27.473792  511069 main.go:141] libmachine: (addons-345184)       <target dev='hdc' bus='scsi'/>
	I0414 10:51:27.473797  511069 main.go:141] libmachine: (addons-345184)       <readonly/>
	I0414 10:51:27.473801  511069 main.go:141] libmachine: (addons-345184)     </disk>
	I0414 10:51:27.473810  511069 main.go:141] libmachine: (addons-345184)     <disk type='file' device='disk'>
	I0414 10:51:27.473815  511069 main.go:141] libmachine: (addons-345184)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 10:51:27.473822  511069 main.go:141] libmachine: (addons-345184)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/addons-345184.rawdisk'/>
	I0414 10:51:27.473827  511069 main.go:141] libmachine: (addons-345184)       <target dev='hda' bus='virtio'/>
	I0414 10:51:27.473834  511069 main.go:141] libmachine: (addons-345184)     </disk>
	I0414 10:51:27.473842  511069 main.go:141] libmachine: (addons-345184)     <interface type='network'>
	I0414 10:51:27.473847  511069 main.go:141] libmachine: (addons-345184)       <source network='mk-addons-345184'/>
	I0414 10:51:27.473853  511069 main.go:141] libmachine: (addons-345184)       <model type='virtio'/>
	I0414 10:51:27.473861  511069 main.go:141] libmachine: (addons-345184)     </interface>
	I0414 10:51:27.473867  511069 main.go:141] libmachine: (addons-345184)     <interface type='network'>
	I0414 10:51:27.473874  511069 main.go:141] libmachine: (addons-345184)       <source network='default'/>
	I0414 10:51:27.473878  511069 main.go:141] libmachine: (addons-345184)       <model type='virtio'/>
	I0414 10:51:27.473885  511069 main.go:141] libmachine: (addons-345184)     </interface>
	I0414 10:51:27.473895  511069 main.go:141] libmachine: (addons-345184)     <serial type='pty'>
	I0414 10:51:27.473903  511069 main.go:141] libmachine: (addons-345184)       <target port='0'/>
	I0414 10:51:27.473909  511069 main.go:141] libmachine: (addons-345184)     </serial>
	I0414 10:51:27.473918  511069 main.go:141] libmachine: (addons-345184)     <console type='pty'>
	I0414 10:51:27.473923  511069 main.go:141] libmachine: (addons-345184)       <target type='serial' port='0'/>
	I0414 10:51:27.473927  511069 main.go:141] libmachine: (addons-345184)     </console>
	I0414 10:51:27.473931  511069 main.go:141] libmachine: (addons-345184)     <rng model='virtio'>
	I0414 10:51:27.473937  511069 main.go:141] libmachine: (addons-345184)       <backend model='random'>/dev/random</backend>
	I0414 10:51:27.473943  511069 main.go:141] libmachine: (addons-345184)     </rng>
	I0414 10:51:27.473948  511069 main.go:141] libmachine: (addons-345184)     
	I0414 10:51:27.473951  511069 main.go:141] libmachine: (addons-345184)     
	I0414 10:51:27.473956  511069 main.go:141] libmachine: (addons-345184)   </devices>
	I0414 10:51:27.473962  511069 main.go:141] libmachine: (addons-345184) </domain>
	I0414 10:51:27.473968  511069 main.go:141] libmachine: (addons-345184) 
	I0414 10:51:27.478855  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:59:9c:f3 in network default
	I0414 10:51:27.479665  511069 main.go:141] libmachine: (addons-345184) starting domain...
	I0414 10:51:27.479689  511069 main.go:141] libmachine: (addons-345184) ensuring networks are active...
	I0414 10:51:27.479698  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:27.480328  511069 main.go:141] libmachine: (addons-345184) Ensuring network default is active
	I0414 10:51:27.480862  511069 main.go:141] libmachine: (addons-345184) Ensuring network mk-addons-345184 is active
	I0414 10:51:27.481287  511069 main.go:141] libmachine: (addons-345184) getting domain XML...
	I0414 10:51:27.481969  511069 main.go:141] libmachine: (addons-345184) creating domain...
	I0414 10:51:28.708982  511069 main.go:141] libmachine: (addons-345184) waiting for IP...
	I0414 10:51:28.709750  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:28.710101  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:28.710167  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:28.710103  511092 retry.go:31] will retry after 228.838711ms: waiting for domain to come up
	I0414 10:51:28.940839  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:28.941338  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:28.941375  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:28.941296  511092 retry.go:31] will retry after 385.369021ms: waiting for domain to come up
	I0414 10:51:29.328013  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:29.328479  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:29.328506  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:29.328422  511092 retry.go:31] will retry after 408.207348ms: waiting for domain to come up
	I0414 10:51:29.738156  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:29.738698  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:29.738733  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:29.738639  511092 retry.go:31] will retry after 456.828125ms: waiting for domain to come up
	I0414 10:51:30.197410  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:30.197920  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:30.197946  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:30.197875  511092 retry.go:31] will retry after 649.753269ms: waiting for domain to come up
	I0414 10:51:30.849033  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:30.849469  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:30.849499  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:30.849430  511092 retry.go:31] will retry after 705.945882ms: waiting for domain to come up
	I0414 10:51:31.557316  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:31.557837  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:31.557906  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:31.557824  511092 retry.go:31] will retry after 882.735401ms: waiting for domain to come up
	I0414 10:51:32.442082  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:32.442573  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:32.442601  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:32.442518  511092 retry.go:31] will retry after 1.356088776s: waiting for domain to come up
	I0414 10:51:33.799967  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:33.800476  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:33.800505  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:33.800428  511092 retry.go:31] will retry after 1.335921569s: waiting for domain to come up
	I0414 10:51:35.138127  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:35.138648  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:35.138676  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:35.138602  511092 retry.go:31] will retry after 1.557803108s: waiting for domain to come up
	I0414 10:51:36.698218  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:36.698681  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:36.698708  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:36.698663  511092 retry.go:31] will retry after 2.170215818s: waiting for domain to come up
	I0414 10:51:38.872180  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:38.872637  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:38.872681  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:38.872628  511092 retry.go:31] will retry after 3.23786792s: waiting for domain to come up
	I0414 10:51:42.112242  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:42.112669  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:42.112691  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:42.112640  511092 retry.go:31] will retry after 4.306698925s: waiting for domain to come up
	I0414 10:51:46.420953  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:46.421390  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find current IP address of domain addons-345184 in network mk-addons-345184
	I0414 10:51:46.421430  511069 main.go:141] libmachine: (addons-345184) DBG | I0414 10:51:46.421326  511092 retry.go:31] will retry after 4.313566958s: waiting for domain to come up
	I0414 10:51:50.739802  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:50.740297  511069 main.go:141] libmachine: (addons-345184) found domain IP: 192.168.39.54
	I0414 10:51:50.740340  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has current primary IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:50.740349  511069 main.go:141] libmachine: (addons-345184) reserving static IP address...
	I0414 10:51:50.740693  511069 main.go:141] libmachine: (addons-345184) DBG | unable to find host DHCP lease matching {name: "addons-345184", mac: "52:54:00:7e:43:2b", ip: "192.168.39.54"} in network mk-addons-345184
	I0414 10:51:50.819858  511069 main.go:141] libmachine: (addons-345184) DBG | Getting to WaitForSSH function...
	I0414 10:51:50.819894  511069 main.go:141] libmachine: (addons-345184) reserved static IP address 192.168.39.54 for domain addons-345184
	I0414 10:51:50.819907  511069 main.go:141] libmachine: (addons-345184) waiting for SSH...
	I0414 10:51:50.822492  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:50.822836  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:50.822870  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:50.823090  511069 main.go:141] libmachine: (addons-345184) DBG | Using SSH client type: external
	I0414 10:51:50.823118  511069 main.go:141] libmachine: (addons-345184) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa (-rw-------)
	I0414 10:51:50.823175  511069 main.go:141] libmachine: (addons-345184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 10:51:50.823205  511069 main.go:141] libmachine: (addons-345184) DBG | About to run SSH command:
	I0414 10:51:50.823218  511069 main.go:141] libmachine: (addons-345184) DBG | exit 0
	I0414 10:51:50.951482  511069 main.go:141] libmachine: (addons-345184) DBG | SSH cmd err, output: <nil>: 
	I0414 10:51:50.951847  511069 main.go:141] libmachine: (addons-345184) KVM machine creation complete
	I0414 10:51:50.952086  511069 main.go:141] libmachine: (addons-345184) Calling .GetConfigRaw
	I0414 10:51:50.952656  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:50.952865  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:50.953046  511069 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 10:51:50.953060  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:51:50.954416  511069 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 10:51:50.954432  511069 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 10:51:50.954438  511069 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 10:51:50.954445  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:50.957109  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:50.957484  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:50.957524  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:50.957614  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:50.957788  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:50.957951  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:50.958086  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:50.958261  511069 main.go:141] libmachine: Using SSH client type: native
	I0414 10:51:50.958507  511069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0414 10:51:50.958519  511069 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 10:51:51.062554  511069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 10:51:51.062592  511069 main.go:141] libmachine: Detecting the provisioner...
	I0414 10:51:51.062605  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:51.065745  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.066121  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:51.066152  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.066314  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:51.066520  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:51.066688  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:51.066850  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:51.067012  511069 main.go:141] libmachine: Using SSH client type: native
	I0414 10:51:51.067337  511069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0414 10:51:51.067353  511069 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 10:51:51.172013  511069 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 10:51:51.172113  511069 main.go:141] libmachine: found compatible host: buildroot
	I0414 10:51:51.172123  511069 main.go:141] libmachine: Provisioning with buildroot...
	I0414 10:51:51.172131  511069 main.go:141] libmachine: (addons-345184) Calling .GetMachineName
	I0414 10:51:51.172430  511069 buildroot.go:166] provisioning hostname "addons-345184"
	I0414 10:51:51.172465  511069 main.go:141] libmachine: (addons-345184) Calling .GetMachineName
	I0414 10:51:51.172696  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:51.175371  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.175730  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:51.175759  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.175884  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:51.176066  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:51.176220  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:51.176359  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:51.176560  511069 main.go:141] libmachine: Using SSH client type: native
	I0414 10:51:51.176808  511069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0414 10:51:51.176822  511069 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-345184 && echo "addons-345184" | sudo tee /etc/hostname
	I0414 10:51:51.293012  511069 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-345184
	
	I0414 10:51:51.293053  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:51.296106  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.296525  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:51.296556  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.296760  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:51.296953  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:51.297139  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:51.297260  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:51.297421  511069 main.go:141] libmachine: Using SSH client type: native
	I0414 10:51:51.297618  511069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0414 10:51:51.297633  511069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-345184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-345184/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-345184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 10:51:51.408388  511069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 10:51:51.408426  511069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 10:51:51.408471  511069 buildroot.go:174] setting up certificates
	I0414 10:51:51.408485  511069 provision.go:84] configureAuth start
	I0414 10:51:51.408497  511069 main.go:141] libmachine: (addons-345184) Calling .GetMachineName
	I0414 10:51:51.408866  511069 main.go:141] libmachine: (addons-345184) Calling .GetIP
	I0414 10:51:51.412156  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.412560  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:51.412661  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.412776  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:51.414926  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.415349  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:51.415371  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.415487  511069 provision.go:143] copyHostCerts
	I0414 10:51:51.415553  511069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 10:51:51.415703  511069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 10:51:51.415784  511069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 10:51:51.415854  511069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.addons-345184 san=[127.0.0.1 192.168.39.54 addons-345184 localhost minikube]
	I0414 10:51:51.837356  511069 provision.go:177] copyRemoteCerts
	I0414 10:51:51.837447  511069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 10:51:51.837475  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:51.840873  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.841242  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:51.841272  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:51.841433  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:51.841660  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:51.841851  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:51.841993  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:51:51.925440  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 10:51:51.952846  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 10:51:51.976271  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 10:51:51.998407  511069 provision.go:87] duration metric: took 589.906566ms to configureAuth
	I0414 10:51:51.998440  511069 buildroot.go:189] setting minikube options for container-runtime
	I0414 10:51:51.998610  511069 config.go:182] Loaded profile config "addons-345184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 10:51:51.998687  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:52.001736  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.002096  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.002138  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.002282  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:52.002452  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:52.002622  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:52.002729  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:52.002888  511069 main.go:141] libmachine: Using SSH client type: native
	I0414 10:51:52.003101  511069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0414 10:51:52.003116  511069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 10:51:52.463277  511069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 10:51:52.463322  511069 main.go:141] libmachine: Checking connection to Docker...
	I0414 10:51:52.463333  511069 main.go:141] libmachine: (addons-345184) Calling .GetURL
	I0414 10:51:52.464701  511069 main.go:141] libmachine: (addons-345184) DBG | using libvirt version 6000000
	I0414 10:51:52.467133  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.467519  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.467565  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.467761  511069 main.go:141] libmachine: Docker is up and running!
	I0414 10:51:52.467779  511069 main.go:141] libmachine: Reticulating splines...
	I0414 10:51:52.467788  511069 client.go:171] duration metric: took 26.266702003s to LocalClient.Create
	I0414 10:51:52.467810  511069 start.go:167] duration metric: took 26.266774393s to libmachine.API.Create "addons-345184"
	I0414 10:51:52.467822  511069 start.go:293] postStartSetup for "addons-345184" (driver="kvm2")
	I0414 10:51:52.467835  511069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 10:51:52.467853  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:52.468088  511069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 10:51:52.468111  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:52.470339  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.470719  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.470742  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.470872  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:52.471032  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:52.471189  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:52.471353  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:51:52.553475  511069 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 10:51:52.557424  511069 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 10:51:52.557454  511069 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 10:51:52.557532  511069 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 10:51:52.557562  511069 start.go:296] duration metric: took 89.731892ms for postStartSetup
	I0414 10:51:52.557606  511069 main.go:141] libmachine: (addons-345184) Calling .GetConfigRaw
	I0414 10:51:52.558251  511069 main.go:141] libmachine: (addons-345184) Calling .GetIP
	I0414 10:51:52.562597  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.563079  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.563104  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.563445  511069 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/config.json ...
	I0414 10:51:52.563631  511069 start.go:128] duration metric: took 26.383200701s to createHost
	I0414 10:51:52.563656  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:52.565967  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.566416  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.566441  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.566573  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:52.566753  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:52.566922  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:52.567060  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:52.567230  511069 main.go:141] libmachine: Using SSH client type: native
	I0414 10:51:52.567472  511069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0414 10:51:52.567484  511069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 10:51:52.671824  511069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744627912.650196574
	
	I0414 10:51:52.671860  511069 fix.go:216] guest clock: 1744627912.650196574
	I0414 10:51:52.671874  511069 fix.go:229] Guest: 2025-04-14 10:51:52.650196574 +0000 UTC Remote: 2025-04-14 10:51:52.563643919 +0000 UTC m=+26.488720480 (delta=86.552655ms)
	I0414 10:51:52.671937  511069 fix.go:200] guest clock delta is within tolerance: 86.552655ms
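	The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host clock, and accept the machine when the delta stays inside a tolerance. A minimal sketch of that comparison, using the exact timestamps from this log (the one-second tolerance and the function name are assumptions, not minikube's real constants):
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// clockDelta parses `date +%s.%N` output (seconds.nanoseconds; %N always prints
	// nine digits) and returns the absolute difference from the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		delta := host.Sub(time.Unix(sec, nsec))
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}
	
	func main() {
		// Guest and host timestamps taken from the log lines above.
		host := time.Date(2025, 4, 14, 10, 51, 52, 563643919, time.UTC)
		delta, err := clockDelta("1744627912.650196574", host)
		if err != nil {
			panic(err)
		}
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < time.Second) // prints delta=86.552655ms
	}
	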
	I0414 10:51:52.671945  511069 start.go:83] releasing machines lock for "addons-345184", held for 26.491600325s
	I0414 10:51:52.672331  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:52.674028  511069 main.go:141] libmachine: (addons-345184) Calling .GetIP
	I0414 10:51:52.676560  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.676916  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.676949  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.677098  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:52.677620  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:52.677779  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:51:52.677947  511069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 10:51:52.678009  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:52.678018  511069 ssh_runner.go:195] Run: cat /version.json
	I0414 10:51:52.678036  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:51:52.680851  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.680879  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.681244  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.681277  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.681324  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:52.681349  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:52.681408  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:52.681607  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:52.681605  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:51:52.681782  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:51:52.681807  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:52.681933  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:51:52.681990  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:51:52.682063  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:51:52.775106  511069 ssh_runner.go:195] Run: systemctl --version
	I0414 10:51:52.780906  511069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 10:51:52.937145  511069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 10:51:52.942655  511069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 10:51:52.942734  511069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 10:51:52.958903  511069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 10:51:52.958933  511069 start.go:495] detecting cgroup driver to use...
	I0414 10:51:52.959018  511069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 10:51:52.974004  511069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 10:51:52.987046  511069 docker.go:217] disabling cri-docker service (if available) ...
	I0414 10:51:52.987120  511069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 10:51:52.999744  511069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 10:51:53.013103  511069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 10:51:53.131007  511069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 10:51:53.271857  511069 docker.go:233] disabling docker service ...
	I0414 10:51:53.271930  511069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 10:51:53.285516  511069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 10:51:53.297581  511069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 10:51:53.429195  511069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 10:51:53.559600  511069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 10:51:53.572838  511069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 10:51:53.590215  511069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 10:51:53.590295  511069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 10:51:53.599869  511069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 10:51:53.599956  511069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 10:51:53.609756  511069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 10:51:53.619318  511069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 10:51:53.628748  511069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 10:51:53.638171  511069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 10:51:53.647100  511069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 10:51:53.662945  511069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 10:51:53.672153  511069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 10:51:53.680641  511069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 10:51:53.680695  511069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 10:51:53.693095  511069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 10:51:53.701934  511069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 10:51:53.835024  511069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 10:51:53.921984  511069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 10:51:53.922095  511069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 10:51:53.926714  511069 start.go:563] Will wait 60s for crictl version
	I0414 10:51:53.926784  511069 ssh_runner.go:195] Run: which crictl
	I0414 10:51:53.930297  511069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 10:51:53.966935  511069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
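	The two "Will wait 60s ..." steps above simply poll: first for the CRI socket to appear on the guest, then for `crictl version` to answer. A rough sketch of the socket wait, run on the guest (this is not minikube's start.go; the poll interval is an assumption):
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls for a unix socket path until it exists or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("CRI socket is ready; crictl version can now be queried")
	}
	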
	I0414 10:51:53.967057  511069 ssh_runner.go:195] Run: crio --version
	I0414 10:51:53.992395  511069 ssh_runner.go:195] Run: crio --version
	I0414 10:51:54.021114  511069 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 10:51:54.022518  511069 main.go:141] libmachine: (addons-345184) Calling .GetIP
	I0414 10:51:54.025724  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:54.026093  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:51:54.026120  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:51:54.026309  511069 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 10:51:54.030424  511069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 10:51:54.042414  511069 kubeadm.go:883] updating cluster {Name:addons-345184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-345184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 10:51:54.042540  511069 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 10:51:54.042592  511069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 10:51:54.073333  511069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 10:51:54.073422  511069 ssh_runner.go:195] Run: which lz4
	I0414 10:51:54.077201  511069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 10:51:54.081143  511069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 10:51:54.081184  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 10:51:55.253912  511069 crio.go:462] duration metric: took 1.176738378s to copy over tarball
	I0414 10:51:55.253989  511069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 10:51:57.363397  511069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.109373346s)
	I0414 10:51:57.363425  511069 crio.go:469] duration metric: took 2.109481921s to extract the tarball
	I0414 10:51:57.363434  511069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 10:51:57.399144  511069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 10:51:57.438741  511069 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 10:51:57.438768  511069 cache_images.go:84] Images are preloaded, skipping loading
	I0414 10:51:57.438777  511069 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.32.2 crio true true} ...
	I0414 10:51:57.438892  511069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-345184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-345184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 10:51:57.438958  511069 ssh_runner.go:195] Run: crio config
	I0414 10:51:57.481160  511069 cni.go:84] Creating CNI manager for ""
	I0414 10:51:57.481189  511069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 10:51:57.481202  511069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 10:51:57.481234  511069 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-345184 NodeName:addons-345184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 10:51:57.481407  511069 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-345184"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 10:51:57.481482  511069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 10:51:57.490706  511069 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 10:51:57.490815  511069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 10:51:57.499416  511069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0414 10:51:57.514861  511069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 10:51:57.531057  511069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0414 10:51:57.546889  511069 ssh_runner.go:195] Run: grep 192.168.39.54	control-plane.minikube.internal$ /etc/hosts
	I0414 10:51:57.550914  511069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 10:51:57.563005  511069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 10:51:57.689588  511069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 10:51:57.705050  511069 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184 for IP: 192.168.39.54
	I0414 10:51:57.705077  511069 certs.go:194] generating shared ca certs ...
	I0414 10:51:57.705106  511069 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:57.705271  511069 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 10:51:58.107455  511069 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt ...
	I0414 10:51:58.107493  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt: {Name:mkbcdc6b1dccac5ae165295f1250d963c4649428 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:58.107712  511069 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key ...
	I0414 10:51:58.107733  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key: {Name:mkeac045cb080103d0a321ee32ec6cd64adcdd1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:58.107850  511069 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 10:51:58.489114  511069 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt ...
	I0414 10:51:58.489155  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt: {Name:mk77f5035f7a1c7b832299679733075b69a430f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:58.489334  511069 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key ...
	I0414 10:51:58.489347  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key: {Name:mk609fe32219d4a52005615bc167166a90f8a9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:58.489417  511069 certs.go:256] generating profile certs ...
	I0414 10:51:58.489486  511069 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.key
	I0414 10:51:58.489507  511069 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt with IP's: []
	I0414 10:51:58.761868  511069 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt ...
	I0414 10:51:58.761903  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: {Name:mk3d1d68a6d4c8c6b90d633b80ac1d61ee5d71ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:58.762077  511069 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.key ...
	I0414 10:51:58.762091  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.key: {Name:mk128663af8e7364dfb93f2c3cf2306bad996a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:58.762157  511069 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.key.c1f05725
	I0414 10:51:58.762178  511069 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.crt.c1f05725 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54]
	I0414 10:51:59.094917  511069 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.crt.c1f05725 ...
	I0414 10:51:59.094953  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.crt.c1f05725: {Name:mk64aa8c8fa27d83fcb5f33d261661db63701a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:59.095132  511069 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.key.c1f05725 ...
	I0414 10:51:59.095145  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.key.c1f05725: {Name:mk5038e181c7cef03aedaa1db8cb724c69b3d289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:59.095224  511069 certs.go:381] copying /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.crt.c1f05725 -> /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.crt
	I0414 10:51:59.095311  511069 certs.go:385] copying /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.key.c1f05725 -> /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.key
	I0414 10:51:59.095358  511069 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.key
	I0414 10:51:59.095377  511069 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.crt with IP's: []
	I0414 10:51:59.220960  511069 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.crt ...
	I0414 10:51:59.220993  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.crt: {Name:mk62a6f5c73e18de13e2bc0904142332a2d50ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:59.221156  511069 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.key ...
	I0414 10:51:59.221168  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.key: {Name:mkc37bc5b9420fd1c602a04ad405f460fbd7fd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:51:59.221334  511069 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 10:51:59.221369  511069 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 10:51:59.221393  511069 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 10:51:59.221416  511069 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 10:51:59.222022  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 10:51:59.246951  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 10:51:59.271720  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 10:51:59.297083  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 10:51:59.321192  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 10:51:59.343616  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 10:51:59.366621  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 10:51:59.389180  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 10:51:59.415554  511069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 10:51:59.439991  511069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
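	The certs.go/crypto.go lines above generate the shared minikubeCA and proxyClientCA key pairs, derive the per-profile apiserver and client certificates from them, and then scp everything into /var/lib/minikube/certs on the guest. As a self-contained sketch of the first step only, here is how a self-signed CA can be produced with Go's standard library (this is not minikube's crypto.go; subject, key size, validity and output paths are illustrative assumptions):
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)
	
	func main() {
		// CA private key.
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed CA certificate template.
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
		if err != nil {
			panic(err)
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
		// Hypothetical output paths; minikube writes these under its profile directory.
		if err := os.WriteFile("ca.crt", certPEM, 0o644); err != nil {
			panic(err)
		}
		if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
			panic(err)
		}
	}
	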
	I0414 10:51:59.458287  511069 ssh_runner.go:195] Run: openssl version
	I0414 10:51:59.464305  511069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 10:51:59.476188  511069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 10:51:59.480472  511069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 10:51:59.480533  511069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 10:51:59.486032  511069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 10:51:59.496401  511069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 10:51:59.500242  511069 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 10:51:59.500304  511069 kubeadm.go:392] StartCluster: {Name:addons-345184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-345184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 10:51:59.500392  511069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 10:51:59.500453  511069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 10:51:59.542779  511069 cri.go:89] found id: ""
	I0414 10:51:59.542858  511069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 10:51:59.552334  511069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 10:51:59.561232  511069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 10:51:59.571331  511069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 10:51:59.571358  511069 kubeadm.go:157] found existing configuration files:
	
	I0414 10:51:59.571404  511069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 10:51:59.579906  511069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 10:51:59.579972  511069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 10:51:59.589042  511069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 10:51:59.597521  511069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 10:51:59.597591  511069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 10:51:59.606763  511069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 10:51:59.616565  511069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 10:51:59.616626  511069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 10:51:59.625750  511069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 10:51:59.634524  511069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 10:51:59.634583  511069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 10:51:59.643743  511069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 10:51:59.695577  511069 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 10:51:59.695695  511069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 10:51:59.802233  511069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 10:51:59.802402  511069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 10:51:59.802543  511069 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 10:51:59.810025  511069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 10:51:59.985759  511069 out.go:235]   - Generating certificates and keys ...
	I0414 10:51:59.985894  511069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 10:51:59.985978  511069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 10:52:00.153954  511069 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 10:52:00.408372  511069 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 10:52:00.528826  511069 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 10:52:00.741051  511069 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 10:52:00.863553  511069 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 10:52:00.863774  511069 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-345184 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0414 10:52:01.112380  511069 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 10:52:01.112567  511069 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-345184 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0414 10:52:01.163594  511069 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 10:52:01.457237  511069 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 10:52:01.896785  511069 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 10:52:01.896921  511069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 10:52:02.242412  511069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 10:52:02.291223  511069 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 10:52:02.457927  511069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 10:52:02.546772  511069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 10:52:02.610456  511069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 10:52:02.613194  511069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 10:52:02.615583  511069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 10:52:02.617366  511069 out.go:235]   - Booting up control plane ...
	I0414 10:52:02.617504  511069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 10:52:02.617597  511069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 10:52:02.618025  511069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 10:52:02.632367  511069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 10:52:02.640431  511069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 10:52:02.640493  511069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 10:52:02.762631  511069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 10:52:02.762814  511069 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 10:52:03.263469  511069 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.877873ms
	I0414 10:52:03.263558  511069 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 10:52:07.762971  511069 kubeadm.go:310] [api-check] The API server is healthy after 4.501912122s
	I0414 10:52:07.776741  511069 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 10:52:07.791058  511069 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 10:52:07.825891  511069 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 10:52:07.826140  511069 kubeadm.go:310] [mark-control-plane] Marking the node addons-345184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 10:52:07.836037  511069 kubeadm.go:310] [bootstrap-token] Using token: zb0c6y.tzyedzw8vy474436
	I0414 10:52:07.837384  511069 out.go:235]   - Configuring RBAC rules ...
	I0414 10:52:07.837552  511069 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 10:52:07.841187  511069 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 10:52:07.851141  511069 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 10:52:07.853969  511069 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 10:52:07.856686  511069 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 10:52:07.859515  511069 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 10:52:08.170433  511069 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 10:52:08.605859  511069 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 10:52:09.170188  511069 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 10:52:09.171788  511069 kubeadm.go:310] 
	I0414 10:52:09.171870  511069 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 10:52:09.171897  511069 kubeadm.go:310] 
	I0414 10:52:09.172040  511069 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 10:52:09.172051  511069 kubeadm.go:310] 
	I0414 10:52:09.172084  511069 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 10:52:09.172159  511069 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 10:52:09.172258  511069 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 10:52:09.172282  511069 kubeadm.go:310] 
	I0414 10:52:09.172349  511069 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 10:52:09.172360  511069 kubeadm.go:310] 
	I0414 10:52:09.172418  511069 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 10:52:09.172428  511069 kubeadm.go:310] 
	I0414 10:52:09.172493  511069 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 10:52:09.172615  511069 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 10:52:09.172716  511069 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 10:52:09.172733  511069 kubeadm.go:310] 
	I0414 10:52:09.172837  511069 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 10:52:09.172952  511069 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 10:52:09.172962  511069 kubeadm.go:310] 
	I0414 10:52:09.173085  511069 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zb0c6y.tzyedzw8vy474436 \
	I0414 10:52:09.173236  511069 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:218652e93704fc369ec14e3a4540532c3ba9e337011061ef10cc8e1465907a51 \
	I0414 10:52:09.173267  511069 kubeadm.go:310] 	--control-plane 
	I0414 10:52:09.173277  511069 kubeadm.go:310] 
	I0414 10:52:09.173406  511069 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 10:52:09.173423  511069 kubeadm.go:310] 
	I0414 10:52:09.173561  511069 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zb0c6y.tzyedzw8vy474436 \
	I0414 10:52:09.173699  511069 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:218652e93704fc369ec14e3a4540532c3ba9e337011061ef10cc8e1465907a51 
	I0414 10:52:09.175099  511069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 10:52:09.175133  511069 cni.go:84] Creating CNI manager for ""
	I0414 10:52:09.175143  511069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 10:52:09.177137  511069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 10:52:09.178475  511069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 10:52:09.190250  511069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 10:52:09.210002  511069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 10:52:09.210094  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:09.210107  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-345184 minikube.k8s.io/updated_at=2025_04_14T10_52_09_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4 minikube.k8s.io/name=addons-345184 minikube.k8s.io/primary=true
	I0414 10:52:09.358494  511069 ops.go:34] apiserver oom_adj: -16
	I0414 10:52:09.358653  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:09.858706  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:10.359518  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:10.859575  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:11.359197  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:11.859485  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:12.359557  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:12.859015  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:13.358831  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:13.858887  511069 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 10:52:13.970765  511069 kubeadm.go:1113] duration metric: took 4.760752451s to wait for elevateKubeSystemPrivileges
	I0414 10:52:13.970819  511069 kubeadm.go:394] duration metric: took 14.47052017s to StartCluster
	I0414 10:52:13.970844  511069 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:52:13.970993  511069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 10:52:13.971617  511069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 10:52:13.971826  511069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 10:52:13.971860  511069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 10:52:13.971932  511069 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0414 10:52:13.972071  511069 addons.go:69] Setting yakd=true in profile "addons-345184"
	I0414 10:52:13.972081  511069 addons.go:69] Setting inspektor-gadget=true in profile "addons-345184"
	I0414 10:52:13.972093  511069 config.go:182] Loaded profile config "addons-345184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 10:52:13.972104  511069 addons.go:69] Setting metrics-server=true in profile "addons-345184"
	I0414 10:52:13.972111  511069 addons.go:238] Setting addon inspektor-gadget=true in "addons-345184"
	I0414 10:52:13.972115  511069 addons.go:238] Setting addon metrics-server=true in "addons-345184"
	I0414 10:52:13.972114  511069 addons.go:69] Setting default-storageclass=true in profile "addons-345184"
	I0414 10:52:13.972096  511069 addons.go:238] Setting addon yakd=true in "addons-345184"
	I0414 10:52:13.972139  511069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-345184"
	I0414 10:52:13.972124  511069 addons.go:69] Setting ingress=true in profile "addons-345184"
	I0414 10:52:13.972150  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972158  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972162  511069 addons.go:69] Setting ingress-dns=true in profile "addons-345184"
	I0414 10:52:13.972168  511069 addons.go:69] Setting gcp-auth=true in profile "addons-345184"
	I0414 10:52:13.972165  511069 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-345184"
	I0414 10:52:13.972178  511069 addons.go:238] Setting addon ingress-dns=true in "addons-345184"
	I0414 10:52:13.972186  511069 mustload.go:65] Loading cluster: addons-345184
	I0414 10:52:13.972189  511069 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-345184"
	I0414 10:52:13.972207  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972222  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972288  511069 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-345184"
	I0414 10:52:13.972298  511069 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-345184"
	I0414 10:52:13.972349  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972365  511069 config.go:182] Loaded profile config "addons-345184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 10:52:13.972633  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.972643  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.972648  511069 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-345184"
	I0414 10:52:13.972648  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.972669  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.972683  511069 addons.go:69] Setting registry=true in profile "addons-345184"
	I0414 10:52:13.972687  511069 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-345184"
	I0414 10:52:13.972695  511069 addons.go:238] Setting addon registry=true in "addons-345184"
	I0414 10:52:13.972702  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.972710  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972714  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972739  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.972755  511069 addons.go:69] Setting volcano=true in profile "addons-345184"
	I0414 10:52:13.972770  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.972778  511069 addons.go:238] Setting addon volcano=true in "addons-345184"
	I0414 10:52:13.972788  511069 addons.go:69] Setting storage-provisioner=true in profile "addons-345184"
	I0414 10:52:13.972797  511069 addons.go:238] Setting addon storage-provisioner=true in "addons-345184"
	I0414 10:52:13.972631  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.972839  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.972848  511069 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-345184"
	I0414 10:52:13.972861  511069 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-345184"
	I0414 10:52:13.972157  511069 addons.go:238] Setting addon ingress=true in "addons-345184"
	I0414 10:52:13.972891  511069 addons.go:69] Setting volumesnapshots=true in profile "addons-345184"
	I0414 10:52:13.972902  511069 addons.go:238] Setting addon volumesnapshots=true in "addons-345184"
	I0414 10:52:13.972684  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.972141  511069 addons.go:69] Setting cloud-spanner=true in profile "addons-345184"
	I0414 10:52:13.972960  511069 addons.go:238] Setting addon cloud-spanner=true in "addons-345184"
	I0414 10:52:13.972150  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.972692  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.973134  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.973202  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.973228  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.973237  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.973249  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.973283  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.973307  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.973353  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.973386  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.973397  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.973420  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.973436  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.973456  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.973618  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.973644  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.973709  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:13.974299  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.974328  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.974335  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.974360  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.974398  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:13.974426  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:13.975932  511069 out.go:177] * Verifying Kubernetes components...
	I0414 10:52:13.977432  511069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 10:52:13.994148  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0414 10:52:13.994172  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
	I0414 10:52:13.994359  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45325
	I0414 10:52:13.994488  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0414 10:52:13.995108  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:13.995133  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:13.995117  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:13.995242  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0414 10:52:13.995798  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:13.995823  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:13.995864  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:13.995890  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:13.995953  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:13.995954  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:13.996099  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:13.996117  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:13.996429  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:13.996455  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:13.996492  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:13.996530  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I0414 10:52:13.996520  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:13.996665  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:13.996688  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:13.997008  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:13.997070  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:13.997245  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:13.998837  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.000713  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:14.005048  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.005106  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.005215  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.005256  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.005429  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.005464  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.005764  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.005798  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.008862  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.008923  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.009421  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.009465  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.009916  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.009969  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.014068  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.014899  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.014924  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.015402  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.016196  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.016241  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.026930  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35169
	I0414 10:52:14.027778  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.028406  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.028429  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.028846  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.029450  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.029489  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.035961  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0414 10:52:14.037096  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.038029  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.038051  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.038680  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.038758  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0414 10:52:14.039270  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.039903  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.039923  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.040356  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.040967  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.041008  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.041917  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0414 10:52:14.042511  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.043159  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.043177  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.043710  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.043756  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.044325  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.044905  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.044950  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.052165  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0414 10:52:14.053159  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.053828  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.053853  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.054055  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0414 10:52:14.054261  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.054491  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.054828  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.055416  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.055435  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.055728  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0414 10:52:14.056503  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.057232  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.057248  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.057675  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.057910  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.058489  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43333
	I0414 10:52:14.058666  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0414 10:52:14.058681  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35233
	I0414 10:52:14.059085  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.059206  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.059534  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.059557  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.059831  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.059850  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.059943  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.060114  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.060258  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.060347  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.060968  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.060991  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.061456  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.061491  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.061731  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.061809  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.062505  511069 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-345184"
	I0414 10:52:14.062567  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:14.062846  511069 addons.go:238] Setting addon default-storageclass=true in "addons-345184"
	I0414 10:52:14.062888  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:14.063084  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.063144  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.063235  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.063277  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.063604  511069 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.1
	I0414 10:52:14.063847  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.064368  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.064419  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.065013  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.065048  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.066140  511069 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 10:52:14.066160  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0414 10:52:14.066180  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.069840  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.070369  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.070400  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.070554  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.070748  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.070843  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.070960  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.076202  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0414 10:52:14.076778  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.077430  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.077449  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.077925  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.078150  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.079984  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40223
	I0414 10:52:14.080637  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.080750  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0414 10:52:14.081855  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.081873  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.082380  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.082718  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.083209  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.083225  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.083633  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.083783  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.087809  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.089528  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0414 10:52:14.089965  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.090144  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.091077  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.091103  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.091561  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.091697  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.091861  511069 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0414 10:52:14.093166  511069 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 10:52:14.093194  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0414 10:52:14.093221  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.094052  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.096237  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.096689  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.096853  511069 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0414 10:52:14.097375  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.097416  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.097700  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.097988  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.098803  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.098902  511069 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0414 10:52:14.099017  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.099918  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0414 10:52:14.100247  511069 out.go:177]   - Using image docker.io/registry:2.8.3
	I0414 10:52:14.100314  511069 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0414 10:52:14.100325  511069 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0414 10:52:14.100347  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.100676  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.101333  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.101353  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.101617  511069 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0414 10:52:14.101641  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0414 10:52:14.101650  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0414 10:52:14.101660  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.102058  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.102340  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.102471  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I0414 10:52:14.103055  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.103096  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.103455  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0414 10:52:14.103664  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.103678  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.104121  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.104209  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.104421  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.104851  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.104871  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.105302  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.105543  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.106539  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.106712  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.107049  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.107682  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.107751  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.107798  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.107705  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.107910  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44263
	I0414 10:52:14.107940  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.107911  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37365
	I0414 10:52:14.108175  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.108369  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.108538  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.108570  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.108623  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.108579  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.108798  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.108859  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.108901  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.109217  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.109272  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.109560  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.109574  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.109633  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.109678  511069 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0414 10:52:14.109830  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.109914  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.110043  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.110252  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.110270  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.110338  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.110757  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.110799  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.110963  511069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 10:52:14.111524  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.111706  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.111882  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.112302  511069 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 10:52:14.112404  511069 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 10:52:14.112418  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 10:52:14.112478  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.113814  511069 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0414 10:52:14.113828  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.113925  511069 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 10:52:14.115226  511069 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 10:52:14.115246  511069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 10:52:14.115453  511069 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0414 10:52:14.115736  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.116090  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0414 10:52:14.116616  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.117078  511069 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 10:52:14.117097  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0414 10:52:14.117115  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.117907  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.118056  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.118079  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.118955  511069 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 10:52:14.118982  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0414 10:52:14.119003  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.120789  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.121380  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.121542  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.121974  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.123443  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.125013  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.125099  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.125114  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.125143  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.125866  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.126044  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I0414 10:52:14.127282  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0414 10:52:14.127986  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0414 10:52:14.128122  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.128219  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.128245  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.128521  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.128665  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.128883  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.129010  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.129049  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.129279  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.129615  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.129745  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.130053  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.130130  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.130504  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.130538  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.130839  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0414 10:52:14.131527  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.131626  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.131998  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.132376  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.132464  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.132501  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.132599  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.133080  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.133125  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.133177  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.134399  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.134635  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.134703  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.135042  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.135945  511069 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0414 10:52:14.136271  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.135799  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.136811  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.136840  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.137050  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.137294  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.137369  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.137506  511069 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0414 10:52:14.137526  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0414 10:52:14.137535  511069 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0414 10:52:14.137309  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.137590  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.137549  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.137648  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.137844  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:14.137858  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:14.138805  511069 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0414 10:52:14.138828  511069 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0414 10:52:14.138853  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.139758  511069 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0414 10:52:14.140218  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:14.140227  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:14.140240  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:14.140251  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:14.140257  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:14.140263  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.140364  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0414 10:52:14.141121  511069 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0414 10:52:14.141139  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0414 10:52:14.141329  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.143140  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.143144  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.143177  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.143198  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.143225  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:14.143260  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:14.143269  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:14.143280  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	W0414 10:52:14.143412  511069 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0414 10:52:14.143559  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:14.143575  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.143587  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:14.143648  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.143662  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.143691  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.143833  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.144010  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.144080  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.144205  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.144218  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.144280  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.144426  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.144515  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.144568  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.144822  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.144638  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.145128  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.147031  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.147438  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.147875  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.147908  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.148094  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.148270  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.148469  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.148500  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0414 10:52:14.148643  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0414 10:52:14.148686  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.149031  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:14.149471  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.149492  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.150030  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.150223  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.151148  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0414 10:52:14.151879  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.153477  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0414 10:52:14.153506  511069 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0414 10:52:14.154791  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0414 10:52:14.154835  511069 out.go:177]   - Using image docker.io/busybox:stable
	I0414 10:52:14.156019  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0414 10:52:14.156080  511069 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 10:52:14.156100  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0414 10:52:14.156133  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.158524  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0414 10:52:14.159602  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.160052  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.160077  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.160261  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.160433  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.160571  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.160688  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.160939  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0414 10:52:14.162038  511069 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0414 10:52:14.163135  511069 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0414 10:52:14.163150  511069 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0414 10:52:14.163167  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.166370  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.166753  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.166824  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.166932  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.167119  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.167356  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.167494  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.169453  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I0414 10:52:14.169791  511069 main.go:141] libmachine: () Calling .GetVersion
	W0414 10:52:14.169935  511069 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59082->192.168.39.54:22: read: connection reset by peer
	I0414 10:52:14.169967  511069 retry.go:31] will retry after 345.766459ms: ssh: handshake failed: read tcp 192.168.39.1:59082->192.168.39.54:22: read: connection reset by peer
	I0414 10:52:14.170300  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:14.170319  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:14.170678  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:14.170889  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:14.172413  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:14.172637  511069 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 10:52:14.172651  511069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 10:52:14.172668  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:14.175077  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.175479  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:14.175502  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:14.175633  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:14.175943  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:14.176081  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:14.176620  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:14.517512  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 10:52:14.527991  511069 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 10:52:14.528015  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0414 10:52:14.530003  511069 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0414 10:52:14.530022  511069 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0414 10:52:14.574740  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 10:52:14.588196  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 10:52:14.634262  511069 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 10:52:14.634293  511069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 10:52:14.646296  511069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 10:52:14.646312  511069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 10:52:14.652740  511069 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0414 10:52:14.652768  511069 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0414 10:52:14.666526  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 10:52:14.679576  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 10:52:14.708969  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0414 10:52:14.739969  511069 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0414 10:52:14.739994  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0414 10:52:14.748941  511069 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0414 10:52:14.748964  511069 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0414 10:52:14.779530  511069 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0414 10:52:14.779565  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0414 10:52:14.779662  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 10:52:14.780896  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 10:52:14.818932  511069 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0414 10:52:14.818968  511069 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0414 10:52:14.858942  511069 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 10:52:14.858974  511069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 10:52:14.912808  511069 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0414 10:52:14.912852  511069 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0414 10:52:14.924484  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0414 10:52:14.957296  511069 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0414 10:52:14.957331  511069 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0414 10:52:14.982473  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0414 10:52:15.020621  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 10:52:15.066738  511069 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0414 10:52:15.066775  511069 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0414 10:52:15.134677  511069 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0414 10:52:15.134718  511069 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0414 10:52:15.216844  511069 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0414 10:52:15.216883  511069 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0414 10:52:15.351202  511069 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0414 10:52:15.351241  511069 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0414 10:52:15.356751  511069 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0414 10:52:15.356786  511069 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0414 10:52:15.447847  511069 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0414 10:52:15.447888  511069 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0414 10:52:15.449193  511069 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0414 10:52:15.449218  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0414 10:52:15.531688  511069 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 10:52:15.531722  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0414 10:52:15.542085  511069 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0414 10:52:15.542120  511069 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0414 10:52:15.616809  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0414 10:52:15.887954  511069 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0414 10:52:15.887987  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0414 10:52:15.891991  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 10:52:16.109591  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592030408s)
	I0414 10:52:16.109676  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:16.109693  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:16.110108  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:16.110126  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:16.110136  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:16.110145  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:16.110155  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:16.110397  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:16.110412  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:16.115480  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:16.115502  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:16.115890  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:16.115914  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:16.115912  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:16.146238  511069 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0414 10:52:16.146270  511069 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0414 10:52:16.414224  511069 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0414 10:52:16.414255  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0414 10:52:16.682603  511069 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0414 10:52:16.682641  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0414 10:52:16.888100  511069 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 10:52:16.888139  511069 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0414 10:52:17.302637  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 10:52:20.893359  511069 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0414 10:52:20.893415  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:20.896860  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:20.897353  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:20.897381  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:20.897573  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:20.897802  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:20.897962  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:20.898156  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:21.226145  511069 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0414 10:52:21.283797  511069 addons.go:238] Setting addon gcp-auth=true in "addons-345184"
	I0414 10:52:21.283866  511069 host.go:66] Checking if "addons-345184" exists ...
	I0414 10:52:21.284200  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:21.284239  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:21.300815  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0414 10:52:21.301429  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:21.301949  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:21.301980  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:21.302385  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:21.302876  511069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 10:52:21.302919  511069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 10:52:21.318770  511069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0414 10:52:21.319268  511069 main.go:141] libmachine: () Calling .GetVersion
	I0414 10:52:21.319782  511069 main.go:141] libmachine: Using API Version  1
	I0414 10:52:21.319812  511069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 10:52:21.320231  511069 main.go:141] libmachine: () Calling .GetMachineName
	I0414 10:52:21.320433  511069 main.go:141] libmachine: (addons-345184) Calling .GetState
	I0414 10:52:21.322081  511069 main.go:141] libmachine: (addons-345184) Calling .DriverName
	I0414 10:52:21.322296  511069 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0414 10:52:21.322318  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHHostname
	I0414 10:52:21.325094  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:21.325520  511069 main.go:141] libmachine: (addons-345184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:2b", ip: ""} in network mk-addons-345184: {Iface:virbr1 ExpiryTime:2025-04-14 11:51:41 +0000 UTC Type:0 Mac:52:54:00:7e:43:2b Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:addons-345184 Clientid:01:52:54:00:7e:43:2b}
	I0414 10:52:21.325568  511069 main.go:141] libmachine: (addons-345184) DBG | domain addons-345184 has defined IP address 192.168.39.54 and MAC address 52:54:00:7e:43:2b in network mk-addons-345184
	I0414 10:52:21.325707  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHPort
	I0414 10:52:21.325916  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHKeyPath
	I0414 10:52:21.326081  511069 main.go:141] libmachine: (addons-345184) Calling .GetSSHUsername
	I0414 10:52:21.326241  511069 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/addons-345184/id_rsa Username:docker}
	I0414 10:52:22.163272  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.575032484s)
	I0414 10:52:22.163338  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163351  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163351  511069 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.517007222s)
	I0414 10:52:22.163419  511069 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.51708623s)
	I0414 10:52:22.163448  511069 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0414 10:52:22.163355  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.588582386s)
	I0414 10:52:22.163483  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.483886549s)
	I0414 10:52:22.163503  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163511  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163508  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163525  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163531  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.45452841s)
	I0414 10:52:22.163464  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.496913329s)
	I0414 10:52:22.163569  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163576  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163582  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163592  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163618  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.163632  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.163639  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.163647  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163663  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163698  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.384009557s)
	I0414 10:52:22.163719  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163727  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163801  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.382885717s)
	I0414 10:52:22.163817  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163824  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.163943  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.239433386s)
	I0414 10:52:22.163967  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.163979  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164048  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.181539816s)
	I0414 10:52:22.164065  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164074  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164077  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164106  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164137  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.164144  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.164151  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164158  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164159  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.143502108s)
	I0414 10:52:22.164172  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164180  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164249  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.547408985s)
	I0414 10:52:22.164262  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164270  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164287  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.164296  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.164298  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164303  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164310  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164344  511069 node_ready.go:35] waiting up to 6m0s for node "addons-345184" to be "Ready" ...
	I0414 10:52:22.164523  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164559  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164578  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.164584  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.164591  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164597  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164616  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164650  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.164658  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.164666  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164669  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164673  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.164693  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.164701  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.164711  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.272682578s)
	W0414 10:52:22.164740  511069 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 10:52:22.164759  511069 retry.go:31] will retry after 312.228035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 10:52:22.164790  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164812  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.164818  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.164900  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164934  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.164952  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.164958  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.164966  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.164972  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.165259  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.165284  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.165290  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.165309  511069 addons.go:479] Verifying addon metrics-server=true in "addons-345184"
	I0414 10:52:22.165361  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.165373  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.165468  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.165489  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.165495  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.165502  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.165507  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.166061  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.166071  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.166078  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.166085  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.166136  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.166160  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.166185  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.166186  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.166191  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.166196  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.166199  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.166204  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.166206  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.166210  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.166556  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.166584  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.166590  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.166713  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.166739  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.166746  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.167093  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.167130  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.167136  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.168270  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.168286  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.168294  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.168301  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.168490  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.168527  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.168538  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.168623  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.168663  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.168673  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.168683  511069 addons.go:479] Verifying addon ingress=true in "addons-345184"
	I0414 10:52:22.169069  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.169097  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.169104  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.170435  511069 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-345184 service yakd-dashboard -n yakd-dashboard
	
	I0414 10:52:22.170445  511069 out.go:177] * Verifying ingress addon...
	I0414 10:52:22.170584  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.170617  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.170635  511069 addons.go:479] Verifying addon registry=true in "addons-345184"
	I0414 10:52:22.172019  511069 out.go:177] * Verifying registry addon...
	I0414 10:52:22.172695  511069 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0414 10:52:22.173972  511069 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0414 10:52:22.178066  511069 node_ready.go:49] node "addons-345184" has status "Ready":"True"
	I0414 10:52:22.178093  511069 node_ready.go:38] duration metric: took 13.728266ms for node "addons-345184" to be "Ready" ...
	I0414 10:52:22.178104  511069 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 10:52:22.184135  511069 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 10:52:22.184159  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:22.184191  511069 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0414 10:52:22.184215  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:22.184778  511069 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-rz6f4" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:22.206378  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:22.206414  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:22.206757  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:22.206781  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:22.206820  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:22.477270  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 10:52:22.672785  511069 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-345184" context rescaled to 1 replicas
	I0414 10:52:22.679397  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:22.682577  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:23.150568  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.847858456s)
	I0414 10:52:23.150604  511069 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.828280985s)
	I0414 10:52:23.150640  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:23.150657  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:23.150935  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:23.150951  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:23.150961  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:23.150967  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:23.151337  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:23.151356  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:23.151355  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:23.151366  511069 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-345184"
	I0414 10:52:23.152117  511069 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 10:52:23.152879  511069 out.go:177] * Verifying csi-hostpath-driver addon...
	I0414 10:52:23.154357  511069 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0414 10:52:23.154974  511069 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0414 10:52:23.155385  511069 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0414 10:52:23.155403  511069 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0414 10:52:23.157648  511069 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 10:52:23.157665  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:23.185651  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:23.190553  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:23.263267  511069 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0414 10:52:23.263315  511069 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0414 10:52:23.371350  511069 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 10:52:23.371375  511069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0414 10:52:23.439586  511069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 10:52:23.662377  511069 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 10:52:23.662402  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:23.679415  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:23.680402  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:24.091562  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.614225722s)
	I0414 10:52:24.091622  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:24.091639  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:24.091954  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:24.091976  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:24.091986  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:24.091995  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:24.092299  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:24.092325  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:24.092332  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:24.162929  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:24.179721  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:24.180439  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:24.190523  511069 pod_ready.go:103] pod "amd-gpu-device-plugin-rz6f4" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:24.646045  511069 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.206407314s)
	I0414 10:52:24.646097  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:24.646110  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:24.646409  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:24.646427  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:24.646436  511069 main.go:141] libmachine: Making call to close driver server
	I0414 10:52:24.646442  511069 main.go:141] libmachine: (addons-345184) Calling .Close
	I0414 10:52:24.646474  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:24.646692  511069 main.go:141] libmachine: (addons-345184) DBG | Closing plugin on server side
	I0414 10:52:24.646737  511069 main.go:141] libmachine: Successfully made call to close driver server
	I0414 10:52:24.646751  511069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 10:52:24.647734  511069 addons.go:479] Verifying addon gcp-auth=true in "addons-345184"
	I0414 10:52:24.649472  511069 out.go:177] * Verifying gcp-auth addon...
	I0414 10:52:24.651560  511069 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0414 10:52:24.660249  511069 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0414 10:52:24.660275  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:24.666691  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:24.696683  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:24.697207  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:25.155874  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:25.160057  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:25.176224  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:25.177391  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:25.197351  511069 pod_ready.go:93] pod "amd-gpu-device-plugin-rz6f4" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:25.197376  511069 pod_ready.go:82] duration metric: took 3.012561743s for pod "amd-gpu-device-plugin-rz6f4" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.197397  511069 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nkg97" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.202472  511069 pod_ready.go:93] pod "coredns-668d6bf9bc-nkg97" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:25.202494  511069 pod_ready.go:82] duration metric: took 5.089253ms for pod "coredns-668d6bf9bc-nkg97" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.202506  511069 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rsmkj" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.208585  511069 pod_ready.go:93] pod "coredns-668d6bf9bc-rsmkj" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:25.208606  511069 pod_ready.go:82] duration metric: took 6.092909ms for pod "coredns-668d6bf9bc-rsmkj" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.208617  511069 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.215221  511069 pod_ready.go:93] pod "etcd-addons-345184" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:25.215245  511069 pod_ready.go:82] duration metric: took 6.620937ms for pod "etcd-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.215255  511069 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.221825  511069 pod_ready.go:93] pod "kube-apiserver-addons-345184" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:25.221848  511069 pod_ready.go:82] duration metric: took 6.58468ms for pod "kube-apiserver-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.221861  511069 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.588796  511069 pod_ready.go:93] pod "kube-controller-manager-addons-345184" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:25.588829  511069 pod_ready.go:82] duration metric: took 366.960215ms for pod "kube-controller-manager-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.588846  511069 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4w7ch" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.654815  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:25.657925  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:25.676580  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:25.677588  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:25.988514  511069 pod_ready.go:93] pod "kube-proxy-4w7ch" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:25.988545  511069 pod_ready.go:82] duration metric: took 399.689387ms for pod "kube-proxy-4w7ch" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:25.988559  511069 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:26.154920  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:26.158138  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:26.175794  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:26.177415  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:26.389112  511069 pod_ready.go:93] pod "kube-scheduler-addons-345184" in "kube-system" namespace has status "Ready":"True"
	I0414 10:52:26.389147  511069 pod_ready.go:82] duration metric: took 400.579025ms for pod "kube-scheduler-addons-345184" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:26.389164  511069 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace to be "Ready" ...
	I0414 10:52:26.655098  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:26.658440  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:26.676626  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:26.677601  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:27.156491  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:27.158809  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:27.257466  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:27.257508  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:27.654922  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:27.658760  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:27.677338  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:27.678432  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:28.155629  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:28.158495  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:28.176494  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:28.177826  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:28.395337  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:28.660181  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:28.660765  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:28.677254  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:28.677432  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:29.155743  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:29.157947  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:29.175181  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:29.178055  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:29.724340  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:29.724436  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:29.724581  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:29.725808  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:30.155105  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:30.157572  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:30.176167  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:30.177637  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:30.396209  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:30.655203  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:30.657217  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:30.675965  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:30.676961  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:31.154797  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:31.158449  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:31.176243  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:31.177510  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:31.655322  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:31.657382  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:31.676352  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:31.677022  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:32.155062  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:32.157332  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:32.176731  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:32.177070  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:32.655168  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:32.657748  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:32.677550  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:32.677711  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:32.894978  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:33.155044  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:33.158165  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:33.175705  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:33.176501  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:33.654495  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:33.657687  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:33.676693  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:33.677565  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:34.158641  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:34.159476  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:34.177112  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:34.177530  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:34.655481  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:34.657719  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:34.677655  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:34.677742  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:35.155419  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:35.157328  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:35.177946  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:35.178403  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:35.395579  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:35.654754  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:35.657727  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:35.676401  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:35.677025  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:36.156995  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:36.160412  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:36.179150  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:36.179267  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:36.654898  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:36.658268  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:36.676096  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:36.676487  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:37.155631  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:37.157920  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:37.175477  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:37.177103  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:37.655118  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:37.658008  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:37.675587  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:37.677523  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:37.916104  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:38.155956  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:38.159383  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:38.175741  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:38.176533  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:38.656335  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:38.658687  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:38.677615  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:38.679719  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:39.155672  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:39.158386  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:39.176319  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:39.176645  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:39.655419  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:39.657927  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:39.677967  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:39.678192  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:40.155298  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:40.157587  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:40.180118  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:40.184386  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:40.394538  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:40.654851  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:40.660450  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:40.675566  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:40.677279  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:41.155020  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:41.157381  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:41.176520  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:41.177608  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:41.655709  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:41.657728  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:41.676304  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:41.677391  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:42.155366  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:42.157661  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:42.176373  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:42.178225  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:42.394762  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:42.654657  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:42.658176  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:42.676667  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:42.677579  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:43.154980  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:43.157644  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:43.176199  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:43.176619  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:43.655135  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:43.657950  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:43.676740  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:43.676828  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:44.155030  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:44.157297  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:44.176105  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:44.177881  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:44.395471  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:44.655414  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:44.657823  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:44.677166  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:44.677379  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:45.154674  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:45.157981  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:45.175448  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:45.177055  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:45.911914  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:45.912179  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:45.912715  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:45.912724  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:46.154525  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:46.158074  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:46.175451  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:46.176814  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:46.654619  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:46.658327  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:46.677144  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:46.677247  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:46.893853  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:47.155108  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:47.157900  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:47.176020  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:47.176834  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:47.654780  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:47.658126  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:47.675549  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:47.678424  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:48.156518  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:48.161668  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:48.177483  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:48.177641  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:48.655438  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:48.657539  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:48.676975  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:48.677749  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:48.894091  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:49.156658  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:49.158316  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:49.176388  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:49.177268  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:49.654456  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:49.657828  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:49.676629  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:49.676649  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:50.154952  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:50.158035  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:50.176775  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:50.176839  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:50.655413  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:50.658210  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:50.677347  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:50.677660  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:50.904126  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:51.155908  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:51.157977  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:51.175758  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:51.177445  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:51.655381  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:51.657569  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:51.676516  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:51.676977  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:52.156626  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:52.158800  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:52.175983  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:52.177250  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:52.654678  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:52.658350  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:52.675859  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:52.676713  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:53.154485  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:53.158328  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:53.470996  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:53.471253  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:53.473968  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:53.655094  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:53.660617  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:53.679278  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:53.679490  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:54.154991  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:54.157337  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:54.175684  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:54.178087  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:54.654988  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:54.657200  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:54.675970  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:54.676601  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:55.154773  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:55.158478  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:55.177687  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:55.178372  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:55.655208  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:55.657406  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:55.676175  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:55.681658  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:55.896298  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:56.155605  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:56.157665  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:56.176167  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:56.177567  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:56.654745  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:56.657676  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:56.676651  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:56.677191  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:57.155972  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:57.159011  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:57.176139  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:57.176694  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:57.655588  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:57.657654  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:57.676813  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:57.677368  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:58.158848  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:58.158850  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:58.175337  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:58.176965  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:58.394959  511069 pod_ready.go:103] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"False"
	I0414 10:52:58.655516  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:58.657733  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:58.676116  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:58.677523  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:59.154587  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:59.157891  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:59.178063  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:52:59.178425  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:59.655801  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:52:59.657950  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:52:59.675795  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:52:59.677548  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:53:00.154974  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:00.157276  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:00.175864  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:00.176864  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:53:00.395283  511069 pod_ready.go:93] pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace has status "Ready":"True"
	I0414 10:53:00.395331  511069 pod_ready.go:82] duration metric: took 34.006156846s for pod "metrics-server-7fbb699795-f2z9r" in "kube-system" namespace to be "Ready" ...
	I0414 10:53:00.395344  511069 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9t2t4" in "kube-system" namespace to be "Ready" ...
	I0414 10:53:00.401618  511069 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9t2t4" in "kube-system" namespace has status "Ready":"True"
	I0414 10:53:00.401640  511069 pod_ready.go:82] duration metric: took 6.290402ms for pod "nvidia-device-plugin-daemonset-9t2t4" in "kube-system" namespace to be "Ready" ...
	I0414 10:53:00.401656  511069 pod_ready.go:39] duration metric: took 38.223535161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 10:53:00.401685  511069 api_server.go:52] waiting for apiserver process to appear ...
	I0414 10:53:00.401741  511069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 10:53:00.449702  511069 api_server.go:72] duration metric: took 46.477801232s to wait for apiserver process to appear ...
	I0414 10:53:00.449732  511069 api_server.go:88] waiting for apiserver healthz status ...
	I0414 10:53:00.449760  511069 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0414 10:53:00.456613  511069 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0414 10:53:00.457942  511069 api_server.go:141] control plane version: v1.32.2
	I0414 10:53:00.457974  511069 api_server.go:131] duration metric: took 8.232768ms to wait for apiserver health ...
	I0414 10:53:00.457985  511069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 10:53:00.462466  511069 system_pods.go:59] 18 kube-system pods found
	I0414 10:53:00.462495  511069 system_pods.go:61] "amd-gpu-device-plugin-rz6f4" [7b1b5bbf-c56d-47ff-9cae-44382a3f4c1a] Running
	I0414 10:53:00.462500  511069 system_pods.go:61] "coredns-668d6bf9bc-rsmkj" [464a4bac-5002-4d62-9428-7200ff295629] Running
	I0414 10:53:00.462508  511069 system_pods.go:61] "csi-hostpath-attacher-0" [2055dbd3-81e9-4d16-841c-a4c55a65ca7f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 10:53:00.462513  511069 system_pods.go:61] "csi-hostpath-resizer-0" [dfc3094a-97b3-4658-8745-675610948bc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 10:53:00.462522  511069 system_pods.go:61] "csi-hostpathplugin-ls4jr" [296e68ad-efd8-4024-9ed3-4970f05851b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 10:53:00.462527  511069 system_pods.go:61] "etcd-addons-345184" [8f35be97-bcf0-40e7-9ad0-635a6ba73641] Running
	I0414 10:53:00.462531  511069 system_pods.go:61] "kube-apiserver-addons-345184" [4b4ab6b9-d136-473e-acbd-ad9e7f0a1b8e] Running
	I0414 10:53:00.462534  511069 system_pods.go:61] "kube-controller-manager-addons-345184" [729f1dca-77f4-4f3e-94ae-ea3f91438679] Running
	I0414 10:53:00.462541  511069 system_pods.go:61] "kube-ingress-dns-minikube" [698943a5-f0e3-484a-932b-4d1e94eeb773] Running
	I0414 10:53:00.462544  511069 system_pods.go:61] "kube-proxy-4w7ch" [57644013-4e0e-4c0b-a866-10b409f96e8b] Running
	I0414 10:53:00.462550  511069 system_pods.go:61] "kube-scheduler-addons-345184" [6da1733c-241d-478b-a7fc-c3fefe53d9a3] Running
	I0414 10:53:00.462553  511069 system_pods.go:61] "metrics-server-7fbb699795-f2z9r" [dd686ce1-d4c9-4679-b521-c5a1faaf9cb3] Running
	I0414 10:53:00.462557  511069 system_pods.go:61] "nvidia-device-plugin-daemonset-9t2t4" [4dd6cb16-270e-4be0-b1ed-35041c492ac3] Running
	I0414 10:53:00.462560  511069 system_pods.go:61] "registry-6c88467877-mhchq" [83fd759f-b12a-4f2b-8cbd-dc2dde8f4950] Running
	I0414 10:53:00.462566  511069 system_pods.go:61] "registry-proxy-xxh28" [d81741d1-e338-403d-b33c-fd7170977c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 10:53:00.462572  511069 system_pods.go:61] "snapshot-controller-68b874b76f-brdkp" [b20f68d6-911c-4e44-a5fe-d08236778233] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 10:53:00.462577  511069 system_pods.go:61] "snapshot-controller-68b874b76f-pdrcr" [30c0804e-0818-4022-a242-05da951b4a3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 10:53:00.462581  511069 system_pods.go:61] "storage-provisioner" [e3e74eb2-3ede-4430-a737-fc8c3e073b42] Running
	I0414 10:53:00.462587  511069 system_pods.go:74] duration metric: took 4.594762ms to wait for pod list to return data ...
	I0414 10:53:00.462594  511069 default_sa.go:34] waiting for default service account to be created ...
	I0414 10:53:00.464800  511069 default_sa.go:45] found service account: "default"
	I0414 10:53:00.464819  511069 default_sa.go:55] duration metric: took 2.219787ms for default service account to be created ...
	I0414 10:53:00.464827  511069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 10:53:00.467639  511069 system_pods.go:86] 18 kube-system pods found
	I0414 10:53:00.467667  511069 system_pods.go:89] "amd-gpu-device-plugin-rz6f4" [7b1b5bbf-c56d-47ff-9cae-44382a3f4c1a] Running
	I0414 10:53:00.467672  511069 system_pods.go:89] "coredns-668d6bf9bc-rsmkj" [464a4bac-5002-4d62-9428-7200ff295629] Running
	I0414 10:53:00.467680  511069 system_pods.go:89] "csi-hostpath-attacher-0" [2055dbd3-81e9-4d16-841c-a4c55a65ca7f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 10:53:00.467686  511069 system_pods.go:89] "csi-hostpath-resizer-0" [dfc3094a-97b3-4658-8745-675610948bc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 10:53:00.467694  511069 system_pods.go:89] "csi-hostpathplugin-ls4jr" [296e68ad-efd8-4024-9ed3-4970f05851b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 10:53:00.467700  511069 system_pods.go:89] "etcd-addons-345184" [8f35be97-bcf0-40e7-9ad0-635a6ba73641] Running
	I0414 10:53:00.467704  511069 system_pods.go:89] "kube-apiserver-addons-345184" [4b4ab6b9-d136-473e-acbd-ad9e7f0a1b8e] Running
	I0414 10:53:00.467708  511069 system_pods.go:89] "kube-controller-manager-addons-345184" [729f1dca-77f4-4f3e-94ae-ea3f91438679] Running
	I0414 10:53:00.467712  511069 system_pods.go:89] "kube-ingress-dns-minikube" [698943a5-f0e3-484a-932b-4d1e94eeb773] Running
	I0414 10:53:00.467716  511069 system_pods.go:89] "kube-proxy-4w7ch" [57644013-4e0e-4c0b-a866-10b409f96e8b] Running
	I0414 10:53:00.467719  511069 system_pods.go:89] "kube-scheduler-addons-345184" [6da1733c-241d-478b-a7fc-c3fefe53d9a3] Running
	I0414 10:53:00.467722  511069 system_pods.go:89] "metrics-server-7fbb699795-f2z9r" [dd686ce1-d4c9-4679-b521-c5a1faaf9cb3] Running
	I0414 10:53:00.467725  511069 system_pods.go:89] "nvidia-device-plugin-daemonset-9t2t4" [4dd6cb16-270e-4be0-b1ed-35041c492ac3] Running
	I0414 10:53:00.467729  511069 system_pods.go:89] "registry-6c88467877-mhchq" [83fd759f-b12a-4f2b-8cbd-dc2dde8f4950] Running
	I0414 10:53:00.467734  511069 system_pods.go:89] "registry-proxy-xxh28" [d81741d1-e338-403d-b33c-fd7170977c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 10:53:00.467739  511069 system_pods.go:89] "snapshot-controller-68b874b76f-brdkp" [b20f68d6-911c-4e44-a5fe-d08236778233] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 10:53:00.467747  511069 system_pods.go:89] "snapshot-controller-68b874b76f-pdrcr" [30c0804e-0818-4022-a242-05da951b4a3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 10:53:00.467752  511069 system_pods.go:89] "storage-provisioner" [e3e74eb2-3ede-4430-a737-fc8c3e073b42] Running
	I0414 10:53:00.467761  511069 system_pods.go:126] duration metric: took 2.929384ms to wait for k8s-apps to be running ...
	I0414 10:53:00.467768  511069 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 10:53:00.467822  511069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 10:53:00.493393  511069 system_svc.go:56] duration metric: took 25.61173ms WaitForService to wait for kubelet
	I0414 10:53:00.493435  511069 kubeadm.go:582] duration metric: took 46.521540657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 10:53:00.493464  511069 node_conditions.go:102] verifying NodePressure condition ...
	I0414 10:53:00.496289  511069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 10:53:00.496324  511069 node_conditions.go:123] node cpu capacity is 2
	I0414 10:53:00.496370  511069 node_conditions.go:105] duration metric: took 2.898456ms to run NodePressure ...
	I0414 10:53:00.496389  511069 start.go:241] waiting for startup goroutines ...
	I0414 10:53:00.655114  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:00.657406  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:00.676394  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:00.677482  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:53:01.154673  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:01.157863  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:01.176289  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:01.178136  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:53:01.656986  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:01.658824  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:01.675239  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:01.677182  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:53:02.156822  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:02.158613  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:02.176662  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:02.178497  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:53:02.654910  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:02.658650  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:02.676418  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:02.677485  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 10:53:03.155043  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:03.157407  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:03.177650  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:03.178117  511069 kapi.go:107] duration metric: took 41.00414368s to wait for kubernetes.io/minikube-addons=registry ...
	I0414 10:53:03.655089  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:03.657540  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:03.676653  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:04.156030  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:04.158371  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:04.175783  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:04.654912  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:04.658264  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:04.676176  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:05.155005  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:05.157607  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:05.176239  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:05.654218  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:05.657567  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:05.676386  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:06.155099  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:06.157324  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:06.175740  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:06.655075  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:06.657478  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:06.676307  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:07.156816  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:07.158291  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:07.175769  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:07.655377  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:07.657569  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:07.676237  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:08.155509  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:08.157520  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:08.176698  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:08.654623  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:08.657790  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:08.676942  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:09.155196  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:09.157487  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:09.176355  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:09.655477  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:09.657829  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:09.676422  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:10.154746  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:10.157997  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:10.176132  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:10.655519  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:10.658220  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:10.675933  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:11.155740  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:11.157738  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:11.176562  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:11.655104  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:11.659753  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:11.676886  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:12.155458  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:12.157722  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:12.177048  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:12.655799  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:12.658264  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:12.678466  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:13.156558  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:13.163122  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:13.178548  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:13.654760  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:13.657899  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:13.675536  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:14.337313  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:14.337369  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:14.337479  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:14.668528  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:14.676372  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:14.767678  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:15.154560  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:15.158554  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:15.175839  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:15.655097  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:15.657771  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:15.677151  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:16.154139  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:16.158541  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:16.175932  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:16.656030  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:16.658212  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:16.675753  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:17.154663  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:17.157949  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:17.175795  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:17.655838  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:17.657777  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:17.676517  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:18.154881  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:18.158304  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:18.176146  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:18.655133  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:18.657431  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:18.676778  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:19.154867  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:19.158405  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:19.176407  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:19.736278  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:19.736632  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:19.737864  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:20.155127  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:20.157388  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:20.175714  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:20.654980  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:20.658287  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:20.676868  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:21.155557  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:21.157940  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:21.175690  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:21.659729  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:21.660755  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:21.760993  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:22.154794  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:22.158028  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:22.175528  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:22.655975  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:22.658205  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:22.676231  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:23.155408  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:23.157830  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:23.176001  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:23.655346  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:23.657972  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:23.675936  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:24.154974  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:24.157416  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:24.176364  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:24.660826  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:24.661067  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:24.676178  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:25.156376  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:25.158657  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:25.176522  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:25.654630  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:25.658348  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:25.675772  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:26.154664  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:26.158248  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:26.176298  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:26.845388  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:26.845533  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:26.845588  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:27.154862  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:27.159317  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:27.181324  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:27.655860  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:27.657848  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:27.675587  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:28.154854  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:28.157774  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:28.180307  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:28.655542  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:28.658217  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:28.676211  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:29.155747  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:29.157704  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:29.176251  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:29.655156  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:29.661175  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 10:53:29.676103  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:30.156267  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:30.163428  511069 kapi.go:107] duration metric: took 1m7.008452636s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0414 10:53:30.176068  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:30.656299  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:30.677197  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:31.155597  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:31.177050  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:31.654960  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:31.675906  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:32.155252  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:32.176382  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:32.654486  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:32.676448  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:33.155516  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:33.177779  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:33.655316  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:33.676121  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:34.154830  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:34.175728  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:34.654761  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:34.676583  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:35.154826  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:35.178770  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:35.654415  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:35.676225  511069 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 10:53:36.158326  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:36.176236  511069 kapi.go:107] duration metric: took 1m14.003535609s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0414 10:53:36.654398  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:37.154692  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:37.654625  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:38.155111  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:38.655834  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:39.156039  511069 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 10:53:39.655545  511069 kapi.go:107] duration metric: took 1m15.00398125s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0414 10:53:39.657081  511069 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-345184 cluster.
	I0414 10:53:39.658227  511069 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0414 10:53:39.659309  511069 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0414 10:53:39.660353  511069 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, metrics-server, cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0414 10:53:39.661352  511069 addons.go:514] duration metric: took 1m25.689425255s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns metrics-server cloud-spanner amd-gpu-device-plugin inspektor-gadget storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0414 10:53:39.661410  511069 start.go:246] waiting for cluster config update ...
	I0414 10:53:39.661435  511069 start.go:255] writing updated cluster config ...
	I0414 10:53:39.661708  511069 ssh_runner.go:195] Run: rm -f paused
	I0414 10:53:39.716264  511069 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 10:53:39.718246  511069 out.go:177] * Done! kubectl is now configured to use "addons-345184" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.768561801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c73264c4-826a-43fa-9612-513c99b906b0 name=/runtime.v1.RuntimeService/Version
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.769600869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80047e87-aa38-4744-a8d0-a4ab9cbb4e81 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.770812751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628204770786330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80047e87-aa38-4744-a8d0-a4ab9cbb4e81 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.771349827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d82a980f-38dd-4c25-be7b-1a96c95c5eb2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.771448232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d82a980f-38dd-4c25-be7b-1a96c95c5eb2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.771739105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4b9f2b8fe947e8f0e20e12c65889f2991a7e8a026cd0f8c6085d4627efc3d0b,PodSandboxId:5f2c2aeb36b30d2f0d80c9b328376cea86a8d363d8f1945a90e24c623dbe294c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744628067778817129,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e12914ca-6f74-4966-9007-ca362d38742e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce92cca1743ed133116b2a9c80131e3f32c212d15979a636da6c453dcec8819,PodSandboxId:c14d597dc8ea9316895616d7143bf3efe0c30c8aceda9557ac9a13b259e89cdd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744628024699541386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df983c7-1248-4362-a462-b532a47b1844,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e0e374906f8fb0485c87037c6e5bb4705486e513d50ce1795ee763070b066,PodSandboxId:5f9a2ea6533736310be95ea4e4de632b7030faa1a7441df4b7b8e35789df4109,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744628015258744117,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rqcpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d0913b9-72c9-4e7b-87f4-fdb8659a2315,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:34ae3b66a6e4fd61b06bcbd36ae86a242541e9881491ce546c8dd8a9a3f0b1f7,PodSandboxId:67ae354d8683cdd0906dd922183e398d50da4d3708cec43c2503ac3dbe516561,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744627997223828067,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wpjgf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6934125e-85ab-4199-9f62-bb86fe2635b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67a09b0bbbbe68063ac3bb62aff1c7496259645bfb4209656b56540e666443e,PodSandboxId:6b14740cd9ac0d415d9fdfaf43bd2818e4fc3b384b3f72b132c7dbc4c2de042a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744627996484938933,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zlh8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9c280756-038a-449d-8e7a-c1e860560ee3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c7a800adce4061d492bb8f26651c1df28ce3ece5205c8b6001bf3d3fb01de5,PodSandboxId:ca9a3070392b10b453c793b4239ec5f9099ca6005b77f3e255dbce9eed723adb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744627952246431192,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698943a5-f0e3-484a-932b-4d1e94eeb773,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d0ccd17caf1c6c39aa416ee71103b51237bfd34ba72ebf72a4240a6fbc4662,PodSandboxId:cfe5d050929fe82890e827b0d09d2c1a3067cf48b05c1ac9f79978a73e96e209,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744627944539115236,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-rz6f4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1b5bbf-c56d-47ff-9cae-44382a3f4c1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3b64fa07d47af0873525a7ff4da1fc7b568e06b0f6d407591c8eac4bd6fbc1c,PodSandboxId:ed1ff61d3955e2ff79a7fc8a0a725c27bd8516ef25612776b76b2a2efca5d7c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744627940198457771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e74eb2-3ede-4430-a737-fc8c3e073b42,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7967ae4aa6e6c83aa5a81784748c305ebd22343a8700c8fe9fe760c1c056732,PodSandboxId:f08d89afeb8a54a4d75041d277e454e1261506892ea561e27fc6e948c158735a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744627937029792901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rsmkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464a4bac-5002-4d62-9428-7200ff295629,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decfff48647b4688655b63b544cb3a52b0732f492deb336c6b9ec
843207d464c,PodSandboxId:264b61aba21c2b9d9289252d59b8adfc9227e96743e84cbd8a43bf8157d8960a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744627934403006642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4w7ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57644013-4e0e-4c0b-a866-10b409f96e8b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea40b8931598c1ffb396d5abedd1220323887dcfc843fab606e74c38edccea,PodSandboxId:7946d282d
8dc4450337fcfd21862faed1a67ff9b180db75ea800ba38ae6f0a68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744627923840378262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af8e54a731d98876d8e8d193995c7c1,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad89c48285f443228302c3cd21c5d9115c82e2c7337c5f4cb5d42f289d306fa6,PodSandboxId
:db5eaa55435b463c5a9b4eaac536452cf0f9e494aebb59077dde70daf0f0c00e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744627923857499022,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a32657f711e4678f122bebe4bb5845b,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae97f98e4fff677f2c59035cfdda49f7ed87df819878beb4eafb598a12cfe29,PodSandboxId:281deaa1424a335f
fb31624a110e27aa955d94e2e27f9c14dbb3838c039d1508,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744627923854496937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2b8dfb06a46480a84154aa8dd35746,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc57851a47fb52ff6d8d22236dee92864bc25d7e0408e575721028d934db01b7,PodSandboxId:6b0f82162dbe5e0271f3f66358d62b091
53608cf0a85894a7662d7965c5c0902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744627923838359921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f9f5c65e313ad553a604d0dabe58cd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d82a980f-38dd-4c25-be7b-1a96c95c5eb2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.805590112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0cff75f-f5f2-4515-8939-8fedbf2c84f5 name=/runtime.v1.RuntimeService/Version
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.805676592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0cff75f-f5f2-4515-8939-8fedbf2c84f5 name=/runtime.v1.RuntimeService/Version
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.806768608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4044d104-9721-4637-9105-5ce0b7ad9a51 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.808112569Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628204808076677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4044d104-9721-4637-9105-5ce0b7ad9a51 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.808816898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a598860-aa53-4491-a8cf-926d0c7ddd07 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.808881381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a598860-aa53-4491-a8cf-926d0c7ddd07 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.810609410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4b9f2b8fe947e8f0e20e12c65889f2991a7e8a026cd0f8c6085d4627efc3d0b,PodSandboxId:5f2c2aeb36b30d2f0d80c9b328376cea86a8d363d8f1945a90e24c623dbe294c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744628067778817129,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e12914ca-6f74-4966-9007-ca362d38742e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce92cca1743ed133116b2a9c80131e3f32c212d15979a636da6c453dcec8819,PodSandboxId:c14d597dc8ea9316895616d7143bf3efe0c30c8aceda9557ac9a13b259e89cdd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744628024699541386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df983c7-1248-4362-a462-b532a47b1844,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e0e374906f8fb0485c87037c6e5bb4705486e513d50ce1795ee763070b066,PodSandboxId:5f9a2ea6533736310be95ea4e4de632b7030faa1a7441df4b7b8e35789df4109,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744628015258744117,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rqcpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d0913b9-72c9-4e7b-87f4-fdb8659a2315,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:34ae3b66a6e4fd61b06bcbd36ae86a242541e9881491ce546c8dd8a9a3f0b1f7,PodSandboxId:67ae354d8683cdd0906dd922183e398d50da4d3708cec43c2503ac3dbe516561,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744627997223828067,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wpjgf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6934125e-85ab-4199-9f62-bb86fe2635b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67a09b0bbbbe68063ac3bb62aff1c7496259645bfb4209656b56540e666443e,PodSandboxId:6b14740cd9ac0d415d9fdfaf43bd2818e4fc3b384b3f72b132c7dbc4c2de042a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744627996484938933,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zlh8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9c280756-038a-449d-8e7a-c1e860560ee3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c7a800adce4061d492bb8f26651c1df28ce3ece5205c8b6001bf3d3fb01de5,PodSandboxId:ca9a3070392b10b453c793b4239ec5f9099ca6005b77f3e255dbce9eed723adb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744627952246431192,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698943a5-f0e3-484a-932b-4d1e94eeb773,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d0ccd17caf1c6c39aa416ee71103b51237bfd34ba72ebf72a4240a6fbc4662,PodSandboxId:cfe5d050929fe82890e827b0d09d2c1a3067cf48b05c1ac9f79978a73e96e209,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744627944539115236,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-rz6f4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1b5bbf-c56d-47ff-9cae-44382a3f4c1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3b64fa07d47af0873525a7ff4da1fc7b568e06b0f6d407591c8eac4bd6fbc1c,PodSandboxId:ed1ff61d3955e2ff79a7fc8a0a725c27bd8516ef25612776b76b2a2efca5d7c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744627940198457771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e74eb2-3ede-4430-a737-fc8c3e073b42,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7967ae4aa6e6c83aa5a81784748c305ebd22343a8700c8fe9fe760c1c056732,PodSandboxId:f08d89afeb8a54a4d75041d277e454e1261506892ea561e27fc6e948c158735a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744627937029792901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rsmkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464a4bac-5002-4d62-9428-7200ff295629,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decfff48647b4688655b63b544cb3a52b0732f492deb336c6b9ec
843207d464c,PodSandboxId:264b61aba21c2b9d9289252d59b8adfc9227e96743e84cbd8a43bf8157d8960a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744627934403006642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4w7ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57644013-4e0e-4c0b-a866-10b409f96e8b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea40b8931598c1ffb396d5abedd1220323887dcfc843fab606e74c38edccea,PodSandboxId:7946d282d
8dc4450337fcfd21862faed1a67ff9b180db75ea800ba38ae6f0a68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744627923840378262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af8e54a731d98876d8e8d193995c7c1,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad89c48285f443228302c3cd21c5d9115c82e2c7337c5f4cb5d42f289d306fa6,PodSandboxId
:db5eaa55435b463c5a9b4eaac536452cf0f9e494aebb59077dde70daf0f0c00e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744627923857499022,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a32657f711e4678f122bebe4bb5845b,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae97f98e4fff677f2c59035cfdda49f7ed87df819878beb4eafb598a12cfe29,PodSandboxId:281deaa1424a335f
fb31624a110e27aa955d94e2e27f9c14dbb3838c039d1508,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744627923854496937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2b8dfb06a46480a84154aa8dd35746,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc57851a47fb52ff6d8d22236dee92864bc25d7e0408e575721028d934db01b7,PodSandboxId:6b0f82162dbe5e0271f3f66358d62b091
53608cf0a85894a7662d7965c5c0902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744627923838359921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-345184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f9f5c65e313ad553a604d0dabe58cd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a598860-aa53-4491-a8cf-926d0c7ddd07 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.833676438Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.833970244Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.834889077Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835140440Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835183317Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835229883Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835329547Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835373805Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835436624Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835470455Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835505768Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Apr 14 10:56:44 addons-345184 crio[664]: time="2025-04-14 10:56:44.835559033Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4b9f2b8fe947       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   5f2c2aeb36b30       nginx
	fce92cca1743e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   c14d597dc8ea9       busybox
	f05e0e374906f       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   5f9a2ea653373       ingress-nginx-controller-56d7c84fd4-rqcpn
	34ae3b66a6e4f       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   67ae354d8683c       ingress-nginx-admission-patch-wpjgf
	a67a09b0bbbbe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   6b14740cd9ac0       ingress-nginx-admission-create-zlh8x
	f5c7a800adce4       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   ca9a3070392b1       kube-ingress-dns-minikube
	b1d0ccd17caf1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   cfe5d050929fe       amd-gpu-device-plugin-rz6f4
	a3b64fa07d47a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   ed1ff61d3955e       storage-provisioner
	b7967ae4aa6e6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   f08d89afeb8a5       coredns-668d6bf9bc-rsmkj
	decfff48647b4       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   264b61aba21c2       kube-proxy-4w7ch
	ad89c48285f44       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago       Running             kube-scheduler            0                   db5eaa55435b4       kube-scheduler-addons-345184
	1ae97f98e4fff       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago       Running             kube-apiserver            0                   281deaa1424a3       kube-apiserver-addons-345184
	05ea40b893159       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago       Running             kube-controller-manager   0                   7946d282d8dc4       kube-controller-manager-addons-345184
	fc57851a47fb5       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   6b0f82162dbe5       etcd-addons-345184
	
	
	==> coredns [b7967ae4aa6e6c83aa5a81784748c305ebd22343a8700c8fe9fe760c1c056732] <==
	[INFO] 10.244.0.8:53819 - 37214 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000340485s
	[INFO] 10.244.0.8:53819 - 777 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000210208s
	[INFO] 10.244.0.8:53819 - 10836 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001898853s
	[INFO] 10.244.0.8:53819 - 61631 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00023114s
	[INFO] 10.244.0.8:53819 - 32094 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000127566s
	[INFO] 10.244.0.8:53819 - 31242 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000103201s
	[INFO] 10.244.0.8:53819 - 13091 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000078316s
	[INFO] 10.244.0.8:42027 - 59187 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000132175s
	[INFO] 10.244.0.8:42027 - 59570 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084563s
	[INFO] 10.244.0.8:43227 - 22645 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077142s
	[INFO] 10.244.0.8:43227 - 23070 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000176628s
	[INFO] 10.244.0.8:58245 - 44152 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177704s
	[INFO] 10.244.0.8:58245 - 44410 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103411s
	[INFO] 10.244.0.8:55797 - 1531 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100235s
	[INFO] 10.244.0.8:55797 - 1779 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109513s
	[INFO] 10.244.0.23:40879 - 18718 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000509484s
	[INFO] 10.244.0.23:45909 - 31673 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000149705s
	[INFO] 10.244.0.23:46982 - 21098 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116259s
	[INFO] 10.244.0.23:60361 - 26246 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091976s
	[INFO] 10.244.0.23:43584 - 25106 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152913s
	[INFO] 10.244.0.23:41708 - 7294 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116525s
	[INFO] 10.244.0.23:37665 - 24130 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000952151s
	[INFO] 10.244.0.23:54467 - 64708 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001402574s
	[INFO] 10.244.0.28:51358 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000843947s
	[INFO] 10.244.0.28:37401 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133342s
	
	
	==> describe nodes <==
	Name:               addons-345184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-345184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
	                    minikube.k8s.io/name=addons-345184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T10_52_09_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-345184
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 10:52:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-345184
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 10:56:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 10:54:41 +0000   Mon, 14 Apr 2025 10:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 10:54:41 +0000   Mon, 14 Apr 2025 10:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 10:54:41 +0000   Mon, 14 Apr 2025 10:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 10:54:41 +0000   Mon, 14 Apr 2025 10:52:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    addons-345184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 eef4331771184ac4bcde2452cebfd2f1
	  System UUID:                eef43317-7118-4ac4-bcde-2452cebfd2f1
	  Boot ID:                    29c9ce9e-f84b-44d1-8ac7-4e78634fd886
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     hello-world-app-7d9564db4-xtzzz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-rqcpn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m24s
	  kube-system                 amd-gpu-device-plugin-rz6f4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 coredns-668d6bf9bc-rsmkj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m32s
	  kube-system                 etcd-addons-345184                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m37s
	  kube-system                 kube-apiserver-addons-345184                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-345184        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-4w7ch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-addons-345184                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m29s  kube-proxy       
	  Normal  Starting                 4m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m37s  kubelet          Node addons-345184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s  kubelet          Node addons-345184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s  kubelet          Node addons-345184 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m36s  kubelet          Node addons-345184 status is now: NodeReady
	  Normal  RegisteredNode           4m33s  node-controller  Node addons-345184 event: Registered Node addons-345184 in Controller
	
	
	==> dmesg <==
	[  +0.059046] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.491438] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.090848] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.767948] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.143970] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.015231] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.183492] kauditd_printk_skb: 150 callbacks suppressed
	[  +7.847059] kauditd_printk_skb: 69 callbacks suppressed
	[Apr14 10:53] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.276815] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.436825] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.341994] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.101366] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.876405] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.322625] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.578029] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.241075] kauditd_printk_skb: 2 callbacks suppressed
	[Apr14 10:54] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.380723] kauditd_printk_skb: 58 callbacks suppressed
	[  +8.469165] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.372814] kauditd_printk_skb: 72 callbacks suppressed
	[ +10.552554] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.930173] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.452616] kauditd_printk_skb: 57 callbacks suppressed
	[Apr14 10:56] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [fc57851a47fb52ff6d8d22236dee92864bc25d7e0408e575721028d934db01b7] <==
	{"level":"warn","ts":"2025-04-14T10:54:00.406334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.720304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T10:54:00.406378Z","caller":"traceutil/trace.go:171","msg":"trace[627604025] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1252; }","duration":"185.788815ms","start":"2025-04-14T10:54:00.220580Z","end":"2025-04-14T10:54:00.406369Z","steps":["trace[627604025] 'agreement among raft nodes before linearized reading'  (duration: 185.662988ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T10:54:00.406513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.783526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T10:54:00.406547Z","caller":"traceutil/trace.go:171","msg":"trace[1424988572] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1252; }","duration":"185.895781ms","start":"2025-04-14T10:54:00.220645Z","end":"2025-04-14T10:54:00.406541Z","steps":["trace[1424988572] 'agreement among raft nodes before linearized reading'  (duration: 185.850086ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T10:54:05.444164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.069639ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7068283289953124773 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" mod_revision:523 > success:<request_delete_range:<key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" > > failure:<request_range:<key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-04-14T10:54:05.444248Z","caller":"traceutil/trace.go:171","msg":"trace[1319707643] linearizableReadLoop","detail":"{readStateIndex:1333; appliedIndex:1332; }","duration":"353.130416ms","start":"2025-04-14T10:54:05.091109Z","end":"2025-04-14T10:54:05.444239Z","steps":["trace[1319707643] 'read index received'  (duration: 230.711384ms)","trace[1319707643] 'applied index is now lower than readState.Index'  (duration: 122.418161ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T10:54:05.444547Z","caller":"traceutil/trace.go:171","msg":"trace[1932936150] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1294; }","duration":"375.654914ms","start":"2025-04-14T10:54:05.068882Z","end":"2025-04-14T10:54:05.444537Z","steps":["trace[1932936150] 'process raft request'  (duration: 252.930259ms)","trace[1932936150] 'compare'  (duration: 121.982638ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T10:54:05.444601Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T10:54:05.068866Z","time spent":"375.702401ms","remote":"127.0.0.1:58784","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":70,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" mod_revision:523 > success:<request_delete_range:<key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" > > failure:<request_range:<key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" > >"}
	{"level":"warn","ts":"2025-04-14T10:54:05.444638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.216052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2025-04-14T10:54:05.444703Z","caller":"traceutil/trace.go:171","msg":"trace[1704656800] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1294; }","duration":"354.312292ms","start":"2025-04-14T10:54:05.090383Z","end":"2025-04-14T10:54:05.444695Z","steps":["trace[1704656800] 'agreement among raft nodes before linearized reading'  (duration: 354.101109ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T10:54:05.444729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T10:54:05.090365Z","time spent":"354.355975ms","remote":"127.0.0.1:58488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":196,"response size":31,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"warn","ts":"2025-04-14T10:54:05.445149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.252104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T10:54:05.445193Z","caller":"traceutil/trace.go:171","msg":"trace[962823832] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:1296; }","duration":"288.317204ms","start":"2025-04-14T10:54:05.156868Z","end":"2025-04-14T10:54:05.445186Z","steps":["trace[962823832] 'agreement among raft nodes before linearized reading'  (duration: 288.213807ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T10:54:05.445384Z","caller":"traceutil/trace.go:171","msg":"trace[1687780963] transaction","detail":"{read_only:false; response_revision:1295; number_of_response:1; }","duration":"354.175075ms","start":"2025-04-14T10:54:05.091202Z","end":"2025-04-14T10:54:05.445377Z","steps":["trace[1687780963] 'process raft request'  (duration: 353.808031ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T10:54:05.445427Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T10:54:05.091187Z","time spent":"354.212524ms","remote":"127.0.0.1:58770","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3583,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:520 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:3547 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >"}
	{"level":"info","ts":"2025-04-14T10:54:05.445496Z","caller":"traceutil/trace.go:171","msg":"trace[502944420] transaction","detail":"{read_only:false; response_revision:1296; number_of_response:1; }","duration":"352.027606ms","start":"2025-04-14T10:54:05.093463Z","end":"2025-04-14T10:54:05.445491Z","steps":["trace[502944420] 'process raft request'  (duration: 351.594831ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T10:54:05.445517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T10:54:05.093448Z","time spent":"352.057931ms","remote":"127.0.0.1:58770","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3461,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/edit\" mod_revision:1293 > success:<request_put:<key:\"/registry/clusterroles/edit\" value_size:3426 >> failure:<request_range:<key:\"/registry/clusterroles/edit\" > >"}
	{"level":"warn","ts":"2025-04-14T10:54:05.445673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.247071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T10:54:05.445688Z","caller":"traceutil/trace.go:171","msg":"trace[437511396] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1296; }","duration":"219.278614ms","start":"2025-04-14T10:54:05.226404Z","end":"2025-04-14T10:54:05.445683Z","steps":["trace[437511396] 'agreement among raft nodes before linearized reading'  (duration: 219.257175ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T10:54:13.431242Z","caller":"traceutil/trace.go:171","msg":"trace[1255331423] transaction","detail":"{read_only:false; response_revision:1372; number_of_response:1; }","duration":"333.805982ms","start":"2025-04-14T10:54:13.097414Z","end":"2025-04-14T10:54:13.431220Z","steps":["trace[1255331423] 'process raft request'  (duration: 333.693192ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T10:54:13.431397Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T10:54:13.097400Z","time spent":"333.940504ms","remote":"127.0.0.1:58572","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1361 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-04-14T10:54:13.436848Z","caller":"traceutil/trace.go:171","msg":"trace[287011344] linearizableReadLoop","detail":"{readStateIndex:1414; appliedIndex:1413; }","duration":"144.335702ms","start":"2025-04-14T10:54:13.292490Z","end":"2025-04-14T10:54:13.436825Z","steps":["trace[287011344] 'read index received'  (duration: 138.720705ms)","trace[287011344] 'applied index is now lower than readState.Index'  (duration: 5.613905ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T10:54:13.436961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.457893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/headlamp\" limit:1 ","response":"range_response_count:1 size:581"}
	{"level":"info","ts":"2025-04-14T10:54:13.436982Z","caller":"traceutil/trace.go:171","msg":"trace[463725445] range","detail":"{range_begin:/registry/namespaces/headlamp; range_end:; response_count:1; response_revision:1372; }","duration":"144.487711ms","start":"2025-04-14T10:54:13.292486Z","end":"2025-04-14T10:54:13.436974Z","steps":["trace[463725445] 'agreement among raft nodes before linearized reading'  (duration: 144.394639ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T10:54:13.593928Z","caller":"traceutil/trace.go:171","msg":"trace[630275898] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1374; }","duration":"103.760969ms","start":"2025-04-14T10:54:13.490150Z","end":"2025-04-14T10:54:13.593911Z","steps":["trace[630275898] 'process raft request'  (duration: 101.610671ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:56:45 up 5 min,  0 users,  load average: 0.45, 1.02, 0.53
	Linux addons-345184 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ae97f98e4fff677f2c59035cfdda49f7ed87df819878beb4eafb598a12cfe29] <==
	I0414 10:53:00.065172       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0414 10:53:50.463467       1 conn.go:339] Error on socket receive: read tcp 192.168.39.54:8443->192.168.39.1:47624: use of closed network connection
	E0414 10:53:50.647079       1 conn.go:339] Error on socket receive: read tcp 192.168.39.54:8443->192.168.39.1:47652: use of closed network connection
	I0414 10:54:00.431657       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.224.52"}
	I0414 10:54:22.620479       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0414 10:54:23.352900       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0414 10:54:23.533852       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.49.103"}
	I0414 10:54:24.542950       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0414 10:54:25.582524       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0414 10:54:41.605417       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0414 10:54:48.589526       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 10:54:48.589583       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 10:54:48.626765       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 10:54:48.626876       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 10:54:48.637589       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 10:54:48.637639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 10:54:48.674521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 10:54:48.674618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 10:54:48.801531       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 10:54:48.801642       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0414 10:54:49.627809       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0414 10:54:49.803450       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0414 10:54:49.817173       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0414 10:55:01.020685       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0414 10:56:43.658745       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.69.19"}
	
	
	==> kube-controller-manager [05ea40b8931598c1ffb396d5abedd1220323887dcfc843fab606e74c38edccea] <==
	E0414 10:55:57.366892       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 10:56:01.923128       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 10:56:01.924040       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 10:56:01.924803       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 10:56:01.924874       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 10:56:19.469033       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 10:56:19.470035       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 10:56:19.471008       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 10:56:19.471035       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 10:56:30.190228       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 10:56:30.190990       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 10:56:30.191775       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 10:56:30.191827       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 10:56:33.311003       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 10:56:33.311901       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 10:56:33.312755       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 10:56:33.312803       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 10:56:43.459979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="26.347239ms"
	I0414 10:56:43.484977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.944858ms"
	I0414 10:56:43.514908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="29.804972ms"
	I0414 10:56:43.515166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="100.666µs"
	W0414 10:56:43.801814       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 10:56:43.804206       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 10:56:43.805804       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 10:56:43.805859       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [decfff48647b4688655b63b544cb3a52b0732f492deb336c6b9ec843207d464c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 10:52:15.084320       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 10:52:15.093979       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	E0414 10:52:15.094047       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 10:52:15.153891       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 10:52:15.153945       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 10:52:15.153966       1 server_linux.go:170] "Using iptables Proxier"
	I0414 10:52:15.156371       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 10:52:15.156640       1 server.go:497] "Version info" version="v1.32.2"
	I0414 10:52:15.156653       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 10:52:15.158609       1 config.go:199] "Starting service config controller"
	I0414 10:52:15.158627       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 10:52:15.158668       1 config.go:105] "Starting endpoint slice config controller"
	I0414 10:52:15.158672       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 10:52:15.159187       1 config.go:329] "Starting node config controller"
	I0414 10:52:15.159194       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 10:52:15.259314       1 shared_informer.go:320] Caches are synced for node config
	I0414 10:52:15.259343       1 shared_informer.go:320] Caches are synced for service config
	I0414 10:52:15.259352       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ad89c48285f443228302c3cd21c5d9115c82e2c7337c5f4cb5d42f289d306fa6] <==
	W0414 10:52:06.206162       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 10:52:06.206210       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:06.206324       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 10:52:06.206376       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:06.206404       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 10:52:06.206459       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:06.206337       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 10:52:06.206500       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:06.206594       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 10:52:06.206663       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 10:52:07.174111       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 10:52:07.174206       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:07.183298       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 10:52:07.183348       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:07.239435       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 10:52:07.239525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:07.338865       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 10:52:07.340142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:07.370863       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 10:52:07.370955       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:07.450659       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 10:52:07.450709       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 10:52:07.575458       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 10:52:07.575501       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0414 10:52:10.499843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 10:56:08 addons-345184 kubelet[1237]: E0414 10:56:08.662214    1237 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628168661914769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:08 addons-345184 kubelet[1237]: E0414 10:56:08.662372    1237 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628168661914769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:16 addons-345184 kubelet[1237]: I0414 10:56:16.472191    1237 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-rz6f4" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 10:56:18 addons-345184 kubelet[1237]: E0414 10:56:18.664609    1237 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628178664235958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:18 addons-345184 kubelet[1237]: E0414 10:56:18.664646    1237 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628178664235958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:28 addons-345184 kubelet[1237]: E0414 10:56:28.667485    1237 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628188667194342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:28 addons-345184 kubelet[1237]: E0414 10:56:28.667749    1237 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628188667194342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:30 addons-345184 kubelet[1237]: I0414 10:56:30.472455    1237 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 10:56:38 addons-345184 kubelet[1237]: E0414 10:56:38.672101    1237 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628198671469632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:38 addons-345184 kubelet[1237]: E0414 10:56:38.672150    1237 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628198671469632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597141,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.464915    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="dfc3094a-97b3-4658-8745-675610948bc1" containerName="csi-resizer"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.464967    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="296e68ad-efd8-4024-9ed3-4970f05851b0" containerName="csi-provisioner"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.464975    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="296e68ad-efd8-4024-9ed3-4970f05851b0" containerName="liveness-probe"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.464981    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="2040774e-5f10-40e1-9bef-f49f9d881bb2" containerName="task-pv-container"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.464986    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="296e68ad-efd8-4024-9ed3-4970f05851b0" containerName="hostpath"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.464992    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="416b9696-2b3f-4855-a039-b23faa08f181" containerName="cloud-spanner-emulator"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.464997    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="2055dbd3-81e9-4d16-841c-a4c55a65ca7f" containerName="csi-attacher"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.465002    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="30c0804e-0818-4022-a242-05da951b4a3b" containerName="volume-snapshot-controller"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.465007    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="296e68ad-efd8-4024-9ed3-4970f05851b0" containerName="node-driver-registrar"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.465011    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="296e68ad-efd8-4024-9ed3-4970f05851b0" containerName="csi-snapshotter"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.465017    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="b20f68d6-911c-4e44-a5fe-d08236778233" containerName="volume-snapshot-controller"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.465023    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="4dd6cb16-270e-4be0-b1ed-35041c492ac3" containerName="nvidia-device-plugin-ctr"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.465027    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="fdbf1d63-7368-4607-8331-9836c67ba85a" containerName="local-path-provisioner"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.465032    1237 memory_manager.go:355] "RemoveStaleState removing state" podUID="296e68ad-efd8-4024-9ed3-4970f05851b0" containerName="csi-external-health-monitor-controller"
	Apr 14 10:56:43 addons-345184 kubelet[1237]: I0414 10:56:43.502712    1237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsrm\" (UniqueName: \"kubernetes.io/projected/d32756fa-fac6-4bf0-adfa-c159394827d4-kube-api-access-jvsrm\") pod \"hello-world-app-7d9564db4-xtzzz\" (UID: \"d32756fa-fac6-4bf0-adfa-c159394827d4\") " pod="default/hello-world-app-7d9564db4-xtzzz"
	
	
	==> storage-provisioner [a3b64fa07d47af0873525a7ff4da1fc7b568e06b0f6d407591c8eac4bd6fbc1c] <==
	I0414 10:52:21.030682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 10:52:21.058924       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 10:52:21.058972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 10:52:21.074874       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 10:52:21.075826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-345184_7448a9d1-a8b2-4087-bf50-02b1876f2720!
	I0414 10:52:21.077881       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"961c4a02-5513-44ac-b9c0-1a722535e933", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-345184_7448a9d1-a8b2-4087-bf50-02b1876f2720 became leader
	I0414 10:52:21.176611       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-345184_7448a9d1-a8b2-4087-bf50-02b1876f2720!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-345184 -n addons-345184
helpers_test.go:261: (dbg) Run:  kubectl --context addons-345184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-xtzzz ingress-nginx-admission-create-zlh8x ingress-nginx-admission-patch-wpjgf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-345184 describe pod hello-world-app-7d9564db4-xtzzz ingress-nginx-admission-create-zlh8x ingress-nginx-admission-patch-wpjgf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-345184 describe pod hello-world-app-7d9564db4-xtzzz ingress-nginx-admission-create-zlh8x ingress-nginx-admission-patch-wpjgf: exit status 1 (71.059806ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-xtzzz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-345184/192.168.39.54
	Start Time:       Mon, 14 Apr 2025 10:56:43 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvsrm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jvsrm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-xtzzz to addons-345184
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zlh8x" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wpjgf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-345184 describe pod hello-world-app-7d9564db4-xtzzz ingress-nginx-admission-create-zlh8x ingress-nginx-admission-patch-wpjgf: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable ingress-dns --alsologtostderr -v=1: (1.358259107s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable ingress --alsologtostderr -v=1: (7.679716269s)
--- FAIL: TestAddons/parallel/Ingress (151.97s)

TestPreload (199.25s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-112466 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-112466 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m4.803734134s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-112466 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-112466 image pull gcr.io/k8s-minikube/busybox: (3.896373866s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-112466
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-112466: (6.580652608s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-112466 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0414 11:47:20.784556  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-112466 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.919192081s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-112466 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-14 11:48:05.175154346 +0000 UTC m=+3415.981330458
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-112466 -n test-preload-112466
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-112466 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-099605 ssh -n                                                                 | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:32 UTC | 14 Apr 25 11:32 UTC |
	|         | multinode-099605-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-099605 ssh -n multinode-099605 sudo cat                                       | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:32 UTC | 14 Apr 25 11:32 UTC |
	|         | /home/docker/cp-test_multinode-099605-m03_multinode-099605.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-099605 cp multinode-099605-m03:/home/docker/cp-test.txt                       | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:32 UTC | 14 Apr 25 11:32 UTC |
	|         | multinode-099605-m02:/home/docker/cp-test_multinode-099605-m03_multinode-099605-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-099605 ssh -n                                                                 | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:32 UTC | 14 Apr 25 11:32 UTC |
	|         | multinode-099605-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-099605 ssh -n multinode-099605-m02 sudo cat                                   | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:32 UTC | 14 Apr 25 11:32 UTC |
	|         | /home/docker/cp-test_multinode-099605-m03_multinode-099605-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-099605 node stop m03                                                          | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:32 UTC | 14 Apr 25 11:32 UTC |
	| node    | multinode-099605 node start                                                             | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:32 UTC | 14 Apr 25 11:33 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-099605                                                                | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:33 UTC |                     |
	| stop    | -p multinode-099605                                                                     | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:33 UTC | 14 Apr 25 11:36 UTC |
	| start   | -p multinode-099605                                                                     | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:36 UTC | 14 Apr 25 11:39 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-099605                                                                | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:39 UTC |                     |
	| node    | multinode-099605 node delete                                                            | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:39 UTC | 14 Apr 25 11:39 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-099605 stop                                                                   | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:39 UTC | 14 Apr 25 11:42 UTC |
	| start   | -p multinode-099605                                                                     | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:42 UTC | 14 Apr 25 11:44 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-099605                                                                | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC |                     |
	| start   | -p multinode-099605-m02                                                                 | multinode-099605-m02 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-099605-m03                                                                 | multinode-099605-m03 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-099605                                                                 | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC |                     |
	| delete  | -p multinode-099605-m03                                                                 | multinode-099605-m03 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
	| delete  | -p multinode-099605                                                                     | multinode-099605     | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
	| start   | -p test-preload-112466                                                                  | test-preload-112466  | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:46 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-112466 image pull                                                          | test-preload-112466  | jenkins | v1.35.0 | 14 Apr 25 11:46 UTC | 14 Apr 25 11:46 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-112466                                                                  | test-preload-112466  | jenkins | v1.35.0 | 14 Apr 25 11:46 UTC | 14 Apr 25 11:47 UTC |
	| start   | -p test-preload-112466                                                                  | test-preload-112466  | jenkins | v1.35.0 | 14 Apr 25 11:47 UTC | 14 Apr 25 11:48 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-112466 image list                                                          | test-preload-112466  | jenkins | v1.35.0 | 14 Apr 25 11:48 UTC | 14 Apr 25 11:48 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 11:47:04
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 11:47:04.085128  541813 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:47:04.085227  541813 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:47:04.085231  541813 out.go:358] Setting ErrFile to fd 2...
	I0414 11:47:04.085235  541813 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:47:04.085396  541813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:47:04.085931  541813 out.go:352] Setting JSON to false
	I0414 11:47:04.086898  541813 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":19775,"bootTime":1744611449,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:47:04.087009  541813 start.go:139] virtualization: kvm guest
	I0414 11:47:04.089642  541813 out.go:177] * [test-preload-112466] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:47:04.091219  541813 notify.go:220] Checking for updates...
	I0414 11:47:04.091257  541813 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:47:04.092635  541813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:47:04.093924  541813 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:47:04.095193  541813 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:47:04.096375  541813 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:47:04.097575  541813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:47:04.099363  541813 config.go:182] Loaded profile config "test-preload-112466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 11:47:04.099954  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:04.100028  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:04.116190  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0414 11:47:04.116710  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:04.117332  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:04.117361  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:04.117778  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:04.118028  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:04.120002  541813 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 11:47:04.121317  541813 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:47:04.121668  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:04.121712  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:04.136937  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39999
	I0414 11:47:04.137337  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:04.137775  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:04.137801  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:04.138119  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:04.138308  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:04.174340  541813 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 11:47:04.175572  541813 start.go:297] selected driver: kvm2
	I0414 11:47:04.175591  541813 start.go:901] validating driver "kvm2" against &{Name:test-preload-112466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-112466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:47:04.175716  541813 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:47:04.176523  541813 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:47:04.176631  541813 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 11:47:04.192765  541813 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 11:47:04.193151  541813 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 11:47:04.193192  541813 cni.go:84] Creating CNI manager for ""
	I0414 11:47:04.193241  541813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:47:04.193293  541813 start.go:340] cluster config:
	{Name:test-preload-112466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-112466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:47:04.193395  541813 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:47:04.195614  541813 out.go:177] * Starting "test-preload-112466" primary control-plane node in "test-preload-112466" cluster
	I0414 11:47:04.196599  541813 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 11:47:04.221813  541813 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 11:47:04.221847  541813 cache.go:56] Caching tarball of preloaded images
	I0414 11:47:04.222038  541813 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 11:47:04.223553  541813 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0414 11:47:04.224704  541813 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 11:47:04.252319  541813 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 11:47:08.729851  541813 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 11:47:08.729959  541813 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 11:47:09.594909  541813 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0414 11:47:09.595050  541813 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/config.json ...
	I0414 11:47:09.595282  541813 start.go:360] acquireMachinesLock for test-preload-112466: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 11:47:09.595371  541813 start.go:364] duration metric: took 49.019µs to acquireMachinesLock for "test-preload-112466"
	I0414 11:47:09.595388  541813 start.go:96] Skipping create...Using existing machine configuration
	I0414 11:47:09.595394  541813 fix.go:54] fixHost starting: 
	I0414 11:47:09.595662  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:09.595705  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:09.610860  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0414 11:47:09.611333  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:09.611727  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:09.611753  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:09.612196  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:09.612386  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:09.612542  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetState
	I0414 11:47:09.614164  541813 fix.go:112] recreateIfNeeded on test-preload-112466: state=Stopped err=<nil>
	I0414 11:47:09.614191  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	W0414 11:47:09.614332  541813 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 11:47:09.616174  541813 out.go:177] * Restarting existing kvm2 VM for "test-preload-112466" ...
	I0414 11:47:09.617505  541813 main.go:141] libmachine: (test-preload-112466) Calling .Start
	I0414 11:47:09.617665  541813 main.go:141] libmachine: (test-preload-112466) starting domain...
	I0414 11:47:09.617686  541813 main.go:141] libmachine: (test-preload-112466) ensuring networks are active...
	I0414 11:47:09.618478  541813 main.go:141] libmachine: (test-preload-112466) Ensuring network default is active
	I0414 11:47:09.618817  541813 main.go:141] libmachine: (test-preload-112466) Ensuring network mk-test-preload-112466 is active
	I0414 11:47:09.619148  541813 main.go:141] libmachine: (test-preload-112466) getting domain XML...
	I0414 11:47:09.619897  541813 main.go:141] libmachine: (test-preload-112466) creating domain...
	I0414 11:47:10.840697  541813 main.go:141] libmachine: (test-preload-112466) waiting for IP...
	I0414 11:47:10.841619  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:10.842037  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:10.842167  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:10.842055  541864 retry.go:31] will retry after 275.989588ms: waiting for domain to come up
	I0414 11:47:11.119775  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:11.120267  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:11.120345  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:11.120244  541864 retry.go:31] will retry after 278.98751ms: waiting for domain to come up
	I0414 11:47:11.401114  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:11.401524  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:11.401548  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:11.401493  541864 retry.go:31] will retry after 370.791379ms: waiting for domain to come up
	I0414 11:47:11.774190  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:11.774616  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:11.774647  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:11.774569  541864 retry.go:31] will retry after 482.343516ms: waiting for domain to come up
	I0414 11:47:12.258272  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:12.258722  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:12.258756  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:12.258704  541864 retry.go:31] will retry after 550.043724ms: waiting for domain to come up
	I0414 11:47:12.810608  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:12.811120  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:12.811210  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:12.811123  541864 retry.go:31] will retry after 931.128615ms: waiting for domain to come up
	I0414 11:47:13.743586  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:13.744070  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:13.744102  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:13.744049  541864 retry.go:31] will retry after 1.155159862s: waiting for domain to come up
	I0414 11:47:14.901113  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:14.901470  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:14.901500  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:14.901428  541864 retry.go:31] will retry after 1.289962423s: waiting for domain to come up
	I0414 11:47:16.193049  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:16.193546  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:16.193580  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:16.193509  541864 retry.go:31] will retry after 1.323448001s: waiting for domain to come up
	I0414 11:47:17.519012  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:17.519576  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:17.519602  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:17.519554  541864 retry.go:31] will retry after 2.003427163s: waiting for domain to come up
	I0414 11:47:19.525207  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:19.525681  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:19.525762  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:19.525678  541864 retry.go:31] will retry after 2.213546757s: waiting for domain to come up
	I0414 11:47:21.742765  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:21.743153  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:21.743188  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:21.743131  541864 retry.go:31] will retry after 3.260775448s: waiting for domain to come up
	I0414 11:47:25.005322  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:25.005744  541813 main.go:141] libmachine: (test-preload-112466) DBG | unable to find current IP address of domain test-preload-112466 in network mk-test-preload-112466
	I0414 11:47:25.005772  541813 main.go:141] libmachine: (test-preload-112466) DBG | I0414 11:47:25.005698  541864 retry.go:31] will retry after 4.278958827s: waiting for domain to come up
	I0414 11:47:29.289239  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.289706  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has current primary IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.289727  541813 main.go:141] libmachine: (test-preload-112466) found domain IP: 192.168.39.140
	I0414 11:47:29.289738  541813 main.go:141] libmachine: (test-preload-112466) reserving static IP address...
	I0414 11:47:29.290155  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "test-preload-112466", mac: "52:54:00:20:d3:db", ip: "192.168.39.140"} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.290186  541813 main.go:141] libmachine: (test-preload-112466) reserved static IP address 192.168.39.140 for domain test-preload-112466
	I0414 11:47:29.290204  541813 main.go:141] libmachine: (test-preload-112466) DBG | skip adding static IP to network mk-test-preload-112466 - found existing host DHCP lease matching {name: "test-preload-112466", mac: "52:54:00:20:d3:db", ip: "192.168.39.140"}
	I0414 11:47:29.290219  541813 main.go:141] libmachine: (test-preload-112466) DBG | Getting to WaitForSSH function...
	I0414 11:47:29.290235  541813 main.go:141] libmachine: (test-preload-112466) waiting for SSH...
	I0414 11:47:29.292571  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.292945  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.292975  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.293115  541813 main.go:141] libmachine: (test-preload-112466) DBG | Using SSH client type: external
	I0414 11:47:29.293136  541813 main.go:141] libmachine: (test-preload-112466) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa (-rw-------)
	I0414 11:47:29.293163  541813 main.go:141] libmachine: (test-preload-112466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 11:47:29.293172  541813 main.go:141] libmachine: (test-preload-112466) DBG | About to run SSH command:
	I0414 11:47:29.293192  541813 main.go:141] libmachine: (test-preload-112466) DBG | exit 0
	I0414 11:47:29.415175  541813 main.go:141] libmachine: (test-preload-112466) DBG | SSH cmd err, output: <nil>: 
	I0414 11:47:29.415493  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetConfigRaw
	I0414 11:47:29.416113  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetIP
	I0414 11:47:29.418629  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.419013  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.419038  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.419322  541813 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/config.json ...
	I0414 11:47:29.419566  541813 machine.go:93] provisionDockerMachine start ...
	I0414 11:47:29.419593  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:29.419818  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:29.422102  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.422418  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.422447  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.422539  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:29.422704  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:29.422857  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:29.423014  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:29.423191  541813 main.go:141] libmachine: Using SSH client type: native
	I0414 11:47:29.423521  541813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0414 11:47:29.423536  541813 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 11:47:29.523472  541813 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 11:47:29.523515  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetMachineName
	I0414 11:47:29.523800  541813 buildroot.go:166] provisioning hostname "test-preload-112466"
	I0414 11:47:29.523831  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetMachineName
	I0414 11:47:29.524039  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:29.526650  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.526929  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.526954  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.527176  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:29.527406  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:29.527593  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:29.527737  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:29.528001  541813 main.go:141] libmachine: Using SSH client type: native
	I0414 11:47:29.528310  541813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0414 11:47:29.528328  541813 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-112466 && echo "test-preload-112466" | sudo tee /etc/hostname
	I0414 11:47:29.640778  541813 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-112466
	
	I0414 11:47:29.640816  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:29.643506  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.643820  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.643854  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.643991  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:29.644209  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:29.644380  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:29.644501  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:29.644655  541813 main.go:141] libmachine: Using SSH client type: native
	I0414 11:47:29.644885  541813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0414 11:47:29.644902  541813 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-112466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-112466/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-112466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 11:47:29.751560  541813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 11:47:29.751595  541813 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 11:47:29.751625  541813 buildroot.go:174] setting up certificates
	I0414 11:47:29.751635  541813 provision.go:84] configureAuth start
	I0414 11:47:29.751644  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetMachineName
	I0414 11:47:29.751961  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetIP
	I0414 11:47:29.754804  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.755185  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.755217  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.755394  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:29.757575  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.757885  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:29.757918  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:29.758019  541813 provision.go:143] copyHostCerts
	I0414 11:47:29.758091  541813 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 11:47:29.758121  541813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 11:47:29.758184  541813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 11:47:29.758282  541813 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 11:47:29.758291  541813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 11:47:29.758318  541813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 11:47:29.758422  541813 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 11:47:29.758432  541813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 11:47:29.758456  541813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 11:47:29.758506  541813 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.test-preload-112466 san=[127.0.0.1 192.168.39.140 localhost minikube test-preload-112466]
	I0414 11:47:30.108933  541813 provision.go:177] copyRemoteCerts
	I0414 11:47:30.108994  541813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 11:47:30.109019  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:30.111413  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.111742  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:30.111776  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.111969  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:30.112203  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.112391  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:30.112546  541813 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa Username:docker}
	I0414 11:47:30.193665  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 11:47:30.220046  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0414 11:47:30.245569  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 11:47:30.271016  541813 provision.go:87] duration metric: took 519.366258ms to configureAuth
	I0414 11:47:30.271049  541813 buildroot.go:189] setting minikube options for container-runtime
	I0414 11:47:30.271251  541813 config.go:182] Loaded profile config "test-preload-112466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 11:47:30.271361  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:30.273868  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.274193  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:30.274222  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.274392  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:30.274578  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.274747  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.274862  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:30.274989  541813 main.go:141] libmachine: Using SSH client type: native
	I0414 11:47:30.275217  541813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0414 11:47:30.275234  541813 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 11:47:30.486198  541813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 11:47:30.486230  541813 machine.go:96] duration metric: took 1.066646352s to provisionDockerMachine
	I0414 11:47:30.486245  541813 start.go:293] postStartSetup for "test-preload-112466" (driver="kvm2")
	I0414 11:47:30.486260  541813 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 11:47:30.486283  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:30.486648  541813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 11:47:30.486691  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:30.489338  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.489663  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:30.489696  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.489831  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:30.490056  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.490204  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:30.490369  541813 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa Username:docker}
	I0414 11:47:30.569818  541813 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 11:47:30.573814  541813 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 11:47:30.573841  541813 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 11:47:30.573923  541813 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 11:47:30.574020  541813 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 11:47:30.574134  541813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 11:47:30.582949  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 11:47:30.606350  541813 start.go:296] duration metric: took 120.084407ms for postStartSetup
	I0414 11:47:30.606412  541813 fix.go:56] duration metric: took 21.011016349s for fixHost
	I0414 11:47:30.606438  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:30.609625  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.609991  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:30.610016  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.610249  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:30.610513  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.610707  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.610856  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:30.611022  541813 main.go:141] libmachine: Using SSH client type: native
	I0414 11:47:30.611224  541813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0414 11:47:30.611235  541813 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 11:47:30.712334  541813 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744631250.689788049
	
	I0414 11:47:30.712367  541813 fix.go:216] guest clock: 1744631250.689788049
	I0414 11:47:30.712379  541813 fix.go:229] Guest: 2025-04-14 11:47:30.689788049 +0000 UTC Remote: 2025-04-14 11:47:30.606417413 +0000 UTC m=+26.558872919 (delta=83.370636ms)
	I0414 11:47:30.712410  541813 fix.go:200] guest clock delta is within tolerance: 83.370636ms
	I0414 11:47:30.712419  541813 start.go:83] releasing machines lock for "test-preload-112466", held for 21.117037886s
	I0414 11:47:30.712453  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:30.712763  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetIP
	I0414 11:47:30.715678  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.716018  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:30.716044  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.716178  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:30.716691  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:30.716901  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:30.717009  541813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 11:47:30.717057  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:30.717107  541813 ssh_runner.go:195] Run: cat /version.json
	I0414 11:47:30.717137  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:30.719793  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.719910  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.720172  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:30.720207  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.720336  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:30.720362  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:30.720371  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:30.720574  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:30.720614  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.720754  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:30.720759  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:30.720918  541813 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa Username:docker}
	I0414 11:47:30.721057  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:30.721220  541813 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa Username:docker}
	I0414 11:47:30.818364  541813 ssh_runner.go:195] Run: systemctl --version
	I0414 11:47:30.824454  541813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 11:47:30.966407  541813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 11:47:30.972819  541813 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 11:47:30.972898  541813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 11:47:30.987949  541813 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 11:47:30.987983  541813 start.go:495] detecting cgroup driver to use...
	I0414 11:47:30.988067  541813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 11:47:31.003708  541813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 11:47:31.016832  541813 docker.go:217] disabling cri-docker service (if available) ...
	I0414 11:47:31.016909  541813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 11:47:31.029972  541813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 11:47:31.043146  541813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 11:47:31.157357  541813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 11:47:31.309051  541813 docker.go:233] disabling docker service ...
	I0414 11:47:31.309133  541813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 11:47:31.322580  541813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 11:47:31.335414  541813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 11:47:31.453059  541813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 11:47:31.566322  541813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 11:47:31.579848  541813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 11:47:31.596963  541813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0414 11:47:31.597029  541813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:47:31.607200  541813 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 11:47:31.607279  541813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:47:31.617264  541813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:47:31.627712  541813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:47:31.638051  541813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 11:47:31.648619  541813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:47:31.658677  541813 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:47:31.675838  541813 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:47:31.685687  541813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 11:47:31.694930  541813 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 11:47:31.695022  541813 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 11:47:31.707111  541813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 11:47:31.716720  541813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:47:31.829232  541813 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 11:47:31.920755  541813 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 11:47:31.920839  541813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 11:47:31.925396  541813 start.go:563] Will wait 60s for crictl version
	I0414 11:47:31.925477  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:31.928972  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 11:47:31.968520  541813 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 11:47:31.968600  541813 ssh_runner.go:195] Run: crio --version
	I0414 11:47:31.993976  541813 ssh_runner.go:195] Run: crio --version
	I0414 11:47:32.022238  541813 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0414 11:47:32.023430  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetIP
	I0414 11:47:32.026241  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:32.026658  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:32.026691  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:32.026978  541813 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 11:47:32.030769  541813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 11:47:32.042007  541813 kubeadm.go:883] updating cluster {Name:test-preload-112466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-112466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 11:47:32.042127  541813 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 11:47:32.042175  541813 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:47:32.074990  541813 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 11:47:32.075074  541813 ssh_runner.go:195] Run: which lz4
	I0414 11:47:32.078871  541813 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 11:47:32.082807  541813 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 11:47:32.082842  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0414 11:47:33.423742  541813 crio.go:462] duration metric: took 1.344913678s to copy over tarball
	I0414 11:47:33.423850  541813 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 11:47:35.797782  541813 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373901695s)
	I0414 11:47:35.797811  541813 crio.go:469] duration metric: took 2.374019291s to extract the tarball
	I0414 11:47:35.797819  541813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 11:47:35.838689  541813 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:47:35.883009  541813 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 11:47:35.883036  541813 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 11:47:35.883119  541813 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:47:35.883127  541813 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 11:47:35.883142  541813 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 11:47:35.883190  541813 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 11:47:35.883187  541813 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 11:47:35.883197  541813 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 11:47:35.883216  541813 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0414 11:47:35.883226  541813 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0414 11:47:35.884795  541813 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 11:47:35.884805  541813 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:47:35.884814  541813 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 11:47:35.884814  541813 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 11:47:35.884797  541813 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0414 11:47:35.884801  541813 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 11:47:35.884795  541813 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 11:47:35.884861  541813 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0414 11:47:36.040663  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0414 11:47:36.045036  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0414 11:47:36.046283  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0414 11:47:36.050603  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0414 11:47:36.059786  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 11:47:36.116911  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0414 11:47:36.119120  541813 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0414 11:47:36.119168  541813 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0414 11:47:36.119218  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:36.127639  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0414 11:47:36.165671  541813 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0414 11:47:36.165727  541813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 11:47:36.165733  541813 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0414 11:47:36.165784  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:36.165796  541813 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 11:47:36.165836  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:36.173882  541813 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0414 11:47:36.173938  541813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 11:47:36.173985  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:36.181122  541813 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0414 11:47:36.181172  541813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 11:47:36.181221  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:36.208128  541813 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0414 11:47:36.208174  541813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 11:47:36.208221  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:36.208231  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 11:47:36.216997  541813 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0414 11:47:36.217054  541813 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0414 11:47:36.217089  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 11:47:36.217102  541813 ssh_runner.go:195] Run: which crictl
	I0414 11:47:36.217184  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 11:47:36.217232  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 11:47:36.217302  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 11:47:36.303657  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 11:47:36.303808  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 11:47:36.322111  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 11:47:36.322111  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 11:47:36.322235  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 11:47:36.322247  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 11:47:36.322357  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 11:47:36.409121  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 11:47:36.409149  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 11:47:36.479026  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 11:47:36.479089  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 11:47:36.479051  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 11:47:36.479166  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 11:47:36.479201  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 11:47:36.553409  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 11:47:36.553482  541813 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0414 11:47:36.553697  541813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0414 11:47:36.617115  541813 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0414 11:47:36.617129  541813 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0414 11:47:36.617220  541813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 11:47:36.617242  541813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0414 11:47:36.617281  541813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 11:47:36.623651  541813 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0414 11:47:36.623677  541813 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0414 11:47:36.623772  541813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 11:47:36.623771  541813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 11:47:36.670835  541813 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0414 11:47:36.670895  541813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0414 11:47:36.670916  541813 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0414 11:47:36.670937  541813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0414 11:47:36.670945  541813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 11:47:36.670962  541813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0414 11:47:36.673568  541813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0414 11:47:36.673608  541813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0414 11:47:36.673642  541813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0414 11:47:36.673644  541813 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0414 11:47:36.673744  541813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0414 11:47:36.677577  541813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0414 11:47:37.498054  541813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:47:39.427584  541813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.756591008s)
	I0414 11:47:39.427623  541813 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0414 11:47:39.427649  541813 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 11:47:39.427721  541813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 11:47:39.427733  541813 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.753960788s)
	I0414 11:47:39.427780  541813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0414 11:47:39.427791  541813 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.929697964s)
	I0414 11:47:40.168195  541813 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0414 11:47:40.168249  541813 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0414 11:47:40.168308  541813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0414 11:47:40.510028  541813 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0414 11:47:40.510079  541813 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 11:47:40.510135  541813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 11:47:40.953622  541813 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0414 11:47:40.953676  541813 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 11:47:40.953744  541813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 11:47:41.697768  541813 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0414 11:47:41.697834  541813 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 11:47:41.697889  541813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 11:47:42.540987  541813 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0414 11:47:42.541053  541813 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0414 11:47:42.541133  541813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0414 11:47:44.690410  541813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.149241183s)
	I0414 11:47:44.690450  541813 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0414 11:47:44.690487  541813 cache_images.go:123] Successfully loaded all cached images
	I0414 11:47:44.690496  541813 cache_images.go:92] duration metric: took 8.807440436s to LoadCachedImages
	I0414 11:47:44.690511  541813 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.24.4 crio true true} ...
	I0414 11:47:44.690633  541813 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-112466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-112466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 11:47:44.690721  541813 ssh_runner.go:195] Run: crio config
	I0414 11:47:44.734276  541813 cni.go:84] Creating CNI manager for ""
	I0414 11:47:44.734300  541813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:47:44.734314  541813 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 11:47:44.734338  541813 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-112466 NodeName:test-preload-112466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 11:47:44.734496  541813 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-112466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 11:47:44.734582  541813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0414 11:47:44.744362  541813 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 11:47:44.744427  541813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 11:47:44.753301  541813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0414 11:47:44.768716  541813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 11:47:44.784661  541813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
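
The 2106-byte file written above is the four-document kubeadm config rendered a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough sketch only, and not minikube's own code, a multi-document stream like that can be enumerated with gopkg.in/yaml.v3; the path below is taken from the log and everything else is assumed:

// sketch: list the apiVersion/kind of each document in the staged kubeadm config
// (illustrative only; this is not minikube code, and the path is the one from the log above)
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}

Run against the file above, this would print the four apiVersion/kind pairs visible in the rendered config.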
	I0414 11:47:44.800752  541813 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0414 11:47:44.804677  541813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 11:47:44.816280  541813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:47:44.946972  541813 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 11:47:44.963519  541813 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466 for IP: 192.168.39.140
	I0414 11:47:44.963544  541813 certs.go:194] generating shared ca certs ...
	I0414 11:47:44.963575  541813 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:47:44.963779  541813 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 11:47:44.963871  541813 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 11:47:44.963893  541813 certs.go:256] generating profile certs ...
	I0414 11:47:44.964023  541813 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/client.key
	I0414 11:47:44.964117  541813 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/apiserver.key.53f37ab0
	I0414 11:47:44.964165  541813 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/proxy-client.key
	I0414 11:47:44.964327  541813 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 11:47:44.964367  541813 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 11:47:44.964381  541813 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 11:47:44.964418  541813 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 11:47:44.964452  541813 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 11:47:44.964486  541813 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 11:47:44.964537  541813 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 11:47:44.965403  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 11:47:45.000669  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 11:47:45.041888  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 11:47:45.068931  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 11:47:45.094353  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 11:47:45.118267  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 11:47:45.160700  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 11:47:45.201372  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 11:47:45.229855  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 11:47:45.252754  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 11:47:45.275570  541813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 11:47:45.297681  541813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 11:47:45.313657  541813 ssh_runner.go:195] Run: openssl version
	I0414 11:47:45.319228  541813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 11:47:45.329208  541813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:47:45.333362  541813 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:47:45.333417  541813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:47:45.338955  541813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 11:47:45.349269  541813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 11:47:45.359690  541813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 11:47:45.363927  541813 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 11:47:45.363981  541813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 11:47:45.370112  541813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 11:47:45.380487  541813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 11:47:45.390851  541813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 11:47:45.395150  541813 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 11:47:45.395228  541813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 11:47:45.400533  541813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 11:47:45.411047  541813 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 11:47:45.415642  541813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 11:47:45.421314  541813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 11:47:45.426977  541813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 11:47:45.432735  541813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 11:47:45.438742  541813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 11:47:45.444507  541813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
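
The six openssl invocations above are `-checkend 86400` checks, i.e. each control-plane certificate must remain valid for at least another 24 hours before the restart proceeds. A minimal Go equivalent of one such check, with the certificate path taken from the log and the rest assumed, could look like:

// sketch: fail if a PEM certificate expires within the next 24h
// (equivalent in spirit to `openssl x509 -checkend 86400`; not minikube code)
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h more")
}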
	I0414 11:47:45.450178  541813 kubeadm.go:392] StartCluster: {Name:test-preload-112466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-112466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:47:45.450274  541813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 11:47:45.450321  541813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 11:47:45.486542  541813 cri.go:89] found id: ""
	I0414 11:47:45.486631  541813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 11:47:45.496492  541813 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 11:47:45.496518  541813 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 11:47:45.496567  541813 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 11:47:45.505630  541813 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 11:47:45.506090  541813 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-112466" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:47:45.506207  541813 kubeconfig.go:62] /home/jenkins/minikube-integration/20534-503273/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-112466" cluster setting kubeconfig missing "test-preload-112466" context setting]
	I0414 11:47:45.506472  541813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:47:45.507009  541813 kapi.go:59] client config for test-preload-112466: &rest.Config{Host:"https://192.168.39.140:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/client.crt", KeyFile:"/home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/client.key", CAFile:"/home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 11:47:45.507481  541813 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 11:47:45.507498  541813 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 11:47:45.507503  541813 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 11:47:45.507506  541813 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 11:47:45.507904  541813 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 11:47:45.516566  541813 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.140
	I0414 11:47:45.516601  541813 kubeadm.go:1160] stopping kube-system containers ...
	I0414 11:47:45.516617  541813 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 11:47:45.516735  541813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 11:47:45.550253  541813 cri.go:89] found id: ""
	I0414 11:47:45.550326  541813 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 11:47:45.565796  541813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 11:47:45.575243  541813 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 11:47:45.575271  541813 kubeadm.go:157] found existing configuration files:
	
	I0414 11:47:45.575341  541813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 11:47:45.584179  541813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 11:47:45.584256  541813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 11:47:45.593552  541813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 11:47:45.602286  541813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 11:47:45.602343  541813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 11:47:45.611154  541813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 11:47:45.619535  541813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 11:47:45.619592  541813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 11:47:45.628405  541813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 11:47:45.637731  541813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 11:47:45.637805  541813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 11:47:45.647205  541813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 11:47:45.656932  541813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 11:47:45.742511  541813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 11:47:46.283444  541813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 11:47:46.540377  541813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 11:47:46.612038  541813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
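
Rather than a full `kubeadm init`, the restart path above re-runs five individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. A simplified sketch of that sequence, with the binary and config paths taken from the log and error handling reduced to the minimum, might be:

// sketch: run the same kubeadm init phases the log shows, in the same order
// (illustrative only; environment handling and logging are simplified assumptions)
package main

import (
	"log"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubeadm", args...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", p, err, out)
		}
	}
}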
	I0414 11:47:46.689705  541813 api_server.go:52] waiting for apiserver process to appear ...
	I0414 11:47:46.689809  541813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:47:47.190851  541813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:47:47.690044  541813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:47:47.706272  541813 api_server.go:72] duration metric: took 1.016564839s to wait for apiserver process to appear ...
	I0414 11:47:47.706312  541813 api_server.go:88] waiting for apiserver healthz status ...
	I0414 11:47:47.706342  541813 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0414 11:47:47.706966  541813 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I0414 11:47:48.206638  541813 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0414 11:47:51.417538  541813 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 11:47:51.417572  541813 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 11:47:51.417596  541813 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0414 11:47:51.452332  541813 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 11:47:51.452362  541813 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 11:47:51.706839  541813 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0414 11:47:51.713672  541813 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 11:47:51.713724  541813 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 11:47:52.207435  541813 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0414 11:47:52.214296  541813 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 11:47:52.214328  541813 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 11:47:52.707143  541813 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0414 11:47:52.715246  541813 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0414 11:47:52.725191  541813 api_server.go:141] control plane version: v1.24.4
	I0414 11:47:52.725224  541813 api_server.go:131] duration metric: took 5.018903654s to wait for apiserver health ...
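
The five-second wait above is a poll of https://192.168.39.140:8443/healthz that tolerates the transient 403 (anonymous user) and 500 (bootstrap hooks still failing) responses until the endpoint finally returns 200. A condensed polling sketch, which skips the client-certificate setup the real check uses and therefore disables TLS verification purely for illustration, could be:

// sketch: poll the apiserver /healthz endpoint until it returns 200 or a deadline passes
// (TLS verification disabled only to keep the example short; the real check trusts the cluster CA)
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.140:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}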
	I0414 11:47:52.725234  541813 cni.go:84] Creating CNI manager for ""
	I0414 11:47:52.725240  541813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:47:52.727032  541813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 11:47:52.728308  541813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 11:47:52.747886  541813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
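
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI for the 10.244.0.0/16 pod CIDR selected earlier. Its exact contents are not shown in this log; the snippet below embeds a generic bridge-plus-portmap conflist only to illustrate the shape such a file takes (the field values are assumptions, not minikube's actual configuration):

// sketch: a generic bridge CNI conflist of the kind written to /etc/cni/net.d
// (NOT the literal contents of minikube's 1-k8s.conflist; values are illustrative)
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.244.0.0/16"}]]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	var v map[string]any
	if err := json.Unmarshal([]byte(conflist), &v); err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid conflist with", len(v["plugins"].([]any)), "plugins")
}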
	I0414 11:47:52.767728  541813 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 11:47:52.773096  541813 system_pods.go:59] 7 kube-system pods found
	I0414 11:47:52.773131  541813 system_pods.go:61] "coredns-6d4b75cb6d-vpsxg" [5f44b727-65c5-49e6-8fcd-2e7744162164] Running
	I0414 11:47:52.773140  541813 system_pods.go:61] "etcd-test-preload-112466" [a30166e6-26ef-451b-8534-fbb955d410cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 11:47:52.773144  541813 system_pods.go:61] "kube-apiserver-test-preload-112466" [4afd77b3-5cbf-4075-b56e-0275c00ed82d] Running
	I0414 11:47:52.773155  541813 system_pods.go:61] "kube-controller-manager-test-preload-112466" [691d98bf-bb9c-4328-a180-98962176ab4f] Running
	I0414 11:47:52.773158  541813 system_pods.go:61] "kube-proxy-8tfqc" [fbaa502e-bc22-43b9-9a89-30c5969681c4] Running
	I0414 11:47:52.773164  541813 system_pods.go:61] "kube-scheduler-test-preload-112466" [e3eb23ec-43c0-4735-84a0-d4c63a477b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 11:47:52.773168  541813 system_pods.go:61] "storage-provisioner" [fd871b1c-6af0-4ce4-b904-4edad25ab945] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 11:47:52.773175  541813 system_pods.go:74] duration metric: took 5.419212ms to wait for pod list to return data ...
	I0414 11:47:52.773183  541813 node_conditions.go:102] verifying NodePressure condition ...
	I0414 11:47:52.782272  541813 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 11:47:52.782303  541813 node_conditions.go:123] node cpu capacity is 2
	I0414 11:47:52.782317  541813 node_conditions.go:105] duration metric: took 9.1289ms to run NodePressure ...
	I0414 11:47:52.782336  541813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 11:47:53.028811  541813 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 11:47:53.031923  541813 kubeadm.go:739] kubelet initialised
	I0414 11:47:53.031945  541813 kubeadm.go:740] duration metric: took 3.098438ms waiting for restarted kubelet to initialise ...
	I0414 11:47:53.031957  541813 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 11:47:53.037158  541813 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-vpsxg" in "kube-system" namespace to be "Ready" ...
	I0414 11:47:53.057951  541813 pod_ready.go:98] node "test-preload-112466" hosting pod "coredns-6d4b75cb6d-vpsxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.057979  541813 pod_ready.go:82] duration metric: took 20.788272ms for pod "coredns-6d4b75cb6d-vpsxg" in "kube-system" namespace to be "Ready" ...
	E0414 11:47:53.057989  541813 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-112466" hosting pod "coredns-6d4b75cb6d-vpsxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.057996  541813 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:47:53.062533  541813 pod_ready.go:98] node "test-preload-112466" hosting pod "etcd-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.062556  541813 pod_ready.go:82] duration metric: took 4.548124ms for pod "etcd-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	E0414 11:47:53.062567  541813 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-112466" hosting pod "etcd-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.062575  541813 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:47:53.066280  541813 pod_ready.go:98] node "test-preload-112466" hosting pod "kube-apiserver-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.066319  541813 pod_ready.go:82] duration metric: took 3.734434ms for pod "kube-apiserver-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	E0414 11:47:53.066330  541813 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-112466" hosting pod "kube-apiserver-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.066338  541813 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:47:53.171703  541813 pod_ready.go:98] node "test-preload-112466" hosting pod "kube-controller-manager-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.171737  541813 pod_ready.go:82] duration metric: took 105.388495ms for pod "kube-controller-manager-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	E0414 11:47:53.171748  541813 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-112466" hosting pod "kube-controller-manager-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.171756  541813 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8tfqc" in "kube-system" namespace to be "Ready" ...
	I0414 11:47:53.570257  541813 pod_ready.go:98] node "test-preload-112466" hosting pod "kube-proxy-8tfqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.570285  541813 pod_ready.go:82] duration metric: took 398.517799ms for pod "kube-proxy-8tfqc" in "kube-system" namespace to be "Ready" ...
	E0414 11:47:53.570294  541813 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-112466" hosting pod "kube-proxy-8tfqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.570301  541813 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:47:53.971359  541813 pod_ready.go:98] node "test-preload-112466" hosting pod "kube-scheduler-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.971391  541813 pod_ready.go:82] duration metric: took 401.083525ms for pod "kube-scheduler-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	E0414 11:47:53.971405  541813 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-112466" hosting pod "kube-scheduler-test-preload-112466" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:53.971416  541813 pod_ready.go:39] duration metric: took 939.446605ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 11:47:53.971445  541813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 11:47:53.982817  541813 ops.go:34] apiserver oom_adj: -16
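
The oom_adj check above confirms kube-apiserver runs with an OOM score adjustment of -16, so the kernel strongly prefers to kill other processes when reclaiming memory. Reading the same value from Go is a plain file read; the PID below is a placeholder, whereas the real check resolves it with pgrep:

// sketch: read a process's legacy oom_adj value (the log shows -16 for kube-apiserver)
// (the PID is a placeholder assumption)
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	pid := 1234 // placeholder; substitute the kube-apiserver PID
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}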
	I0414 11:47:53.982842  541813 kubeadm.go:597] duration metric: took 8.486318677s to restartPrimaryControlPlane
	I0414 11:47:53.982851  541813 kubeadm.go:394] duration metric: took 8.532683954s to StartCluster
	I0414 11:47:53.982879  541813 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:47:53.982959  541813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:47:53.983743  541813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:47:53.983976  541813 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 11:47:53.984074  541813 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 11:47:53.984194  541813 addons.go:69] Setting storage-provisioner=true in profile "test-preload-112466"
	I0414 11:47:53.984214  541813 addons.go:69] Setting default-storageclass=true in profile "test-preload-112466"
	I0414 11:47:53.984227  541813 config.go:182] Loaded profile config "test-preload-112466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 11:47:53.984230  541813 addons.go:238] Setting addon storage-provisioner=true in "test-preload-112466"
	W0414 11:47:53.984278  541813 addons.go:247] addon storage-provisioner should already be in state true
	I0414 11:47:53.984301  541813 host.go:66] Checking if "test-preload-112466" exists ...
	I0414 11:47:53.984235  541813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-112466"
	I0414 11:47:53.984621  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:53.984672  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:53.984743  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:53.984793  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:53.986396  541813 out.go:177] * Verifying Kubernetes components...
	I0414 11:47:53.987788  541813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:47:54.000693  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0414 11:47:54.000724  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0414 11:47:54.001198  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:54.001300  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:54.001702  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:54.001720  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:54.001842  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:54.001880  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:54.002076  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:54.002267  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:54.002451  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetState
	I0414 11:47:54.002587  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:54.002624  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:54.004742  541813 kapi.go:59] client config for test-preload-112466: &rest.Config{Host:"https://192.168.39.140:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/client.crt", KeyFile:"/home/jenkins/minikube-integration/20534-503273/.minikube/profiles/test-preload-112466/client.key", CAFile:"/home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 11:47:54.005097  541813 addons.go:238] Setting addon default-storageclass=true in "test-preload-112466"
	W0414 11:47:54.005120  541813 addons.go:247] addon default-storageclass should already be in state true
	I0414 11:47:54.005147  541813 host.go:66] Checking if "test-preload-112466" exists ...
	I0414 11:47:54.005473  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:54.005535  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:54.018912  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0414 11:47:54.019547  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:54.020128  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:54.020152  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:54.020495  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:54.020706  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetState
	I0414 11:47:54.021354  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44777
	I0414 11:47:54.021902  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:54.022442  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:54.022468  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:54.022689  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:54.022827  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:54.023325  541813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:47:54.023372  541813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:47:54.024644  541813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:47:54.026046  541813 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 11:47:54.026061  541813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 11:47:54.026076  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:54.029131  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:54.029561  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:54.029593  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:54.029735  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:54.029924  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:54.030095  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:54.030217  541813 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa Username:docker}
	I0414 11:47:54.071249  541813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I0414 11:47:54.071712  541813 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:47:54.072232  541813 main.go:141] libmachine: Using API Version  1
	I0414 11:47:54.072265  541813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:47:54.072683  541813 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:47:54.072913  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetState
	I0414 11:47:54.074739  541813 main.go:141] libmachine: (test-preload-112466) Calling .DriverName
	I0414 11:47:54.074952  541813 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 11:47:54.074972  541813 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 11:47:54.074991  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHHostname
	I0414 11:47:54.078124  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:54.078581  541813 main.go:141] libmachine: (test-preload-112466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d3:db", ip: ""} in network mk-test-preload-112466: {Iface:virbr1 ExpiryTime:2025-04-14 12:47:20 +0000 UTC Type:0 Mac:52:54:00:20:d3:db Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:test-preload-112466 Clientid:01:52:54:00:20:d3:db}
	I0414 11:47:54.078615  541813 main.go:141] libmachine: (test-preload-112466) DBG | domain test-preload-112466 has defined IP address 192.168.39.140 and MAC address 52:54:00:20:d3:db in network mk-test-preload-112466
	I0414 11:47:54.078748  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHPort
	I0414 11:47:54.078912  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHKeyPath
	I0414 11:47:54.079068  541813 main.go:141] libmachine: (test-preload-112466) Calling .GetSSHUsername
	I0414 11:47:54.079195  541813 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/test-preload-112466/id_rsa Username:docker}
	I0414 11:47:54.148671  541813 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 11:47:54.166402  541813 node_ready.go:35] waiting up to 6m0s for node "test-preload-112466" to be "Ready" ...
	I0414 11:47:54.238345  541813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 11:47:54.263798  541813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 11:47:55.229806  541813 main.go:141] libmachine: Making call to close driver server
	I0414 11:47:55.229828  541813 main.go:141] libmachine: (test-preload-112466) Calling .Close
	I0414 11:47:55.230142  541813 main.go:141] libmachine: Successfully made call to close driver server
	I0414 11:47:55.230161  541813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 11:47:55.230171  541813 main.go:141] libmachine: Making call to close driver server
	I0414 11:47:55.230178  541813 main.go:141] libmachine: (test-preload-112466) Calling .Close
	I0414 11:47:55.230144  541813 main.go:141] libmachine: (test-preload-112466) DBG | Closing plugin on server side
	I0414 11:47:55.230523  541813 main.go:141] libmachine: Successfully made call to close driver server
	I0414 11:47:55.230541  541813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 11:47:55.230538  541813 main.go:141] libmachine: (test-preload-112466) DBG | Closing plugin on server side
	I0414 11:47:55.234065  541813 main.go:141] libmachine: Making call to close driver server
	I0414 11:47:55.234091  541813 main.go:141] libmachine: (test-preload-112466) Calling .Close
	I0414 11:47:55.234354  541813 main.go:141] libmachine: Successfully made call to close driver server
	I0414 11:47:55.234369  541813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 11:47:55.234378  541813 main.go:141] libmachine: Making call to close driver server
	I0414 11:47:55.234384  541813 main.go:141] libmachine: (test-preload-112466) Calling .Close
	I0414 11:47:55.234636  541813 main.go:141] libmachine: Successfully made call to close driver server
	I0414 11:47:55.234655  541813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 11:47:55.234662  541813 main.go:141] libmachine: (test-preload-112466) DBG | Closing plugin on server side
	I0414 11:47:55.240091  541813 main.go:141] libmachine: Making call to close driver server
	I0414 11:47:55.240109  541813 main.go:141] libmachine: (test-preload-112466) Calling .Close
	I0414 11:47:55.240359  541813 main.go:141] libmachine: Successfully made call to close driver server
	I0414 11:47:55.240376  541813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 11:47:55.240403  541813 main.go:141] libmachine: (test-preload-112466) DBG | Closing plugin on server side
	I0414 11:47:55.243144  541813 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 11:47:55.244192  541813 addons.go:514] duration metric: took 1.260130212s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 11:47:56.169948  541813 node_ready.go:53] node "test-preload-112466" has status "Ready":"False"
	I0414 11:47:58.170478  541813 node_ready.go:53] node "test-preload-112466" has status "Ready":"False"
	I0414 11:48:00.170735  541813 node_ready.go:53] node "test-preload-112466" has status "Ready":"False"
	I0414 11:48:01.670875  541813 node_ready.go:49] node "test-preload-112466" has status "Ready":"True"
	I0414 11:48:01.670905  541813 node_ready.go:38] duration metric: took 7.504468786s for node "test-preload-112466" to be "Ready" ...
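
The node wait above polls until "test-preload-112466" reports condition Ready=True, which here took about 7.5 seconds of the 6-minute budget. A condensed client-go sketch of that style of wait, with the kubeconfig path assumed and minikube's extra logging and backoff omitted, might be:

// sketch: poll a node's Ready condition via client-go until it is True or a deadline passes
// (kubeconfig path is an assumption; this is not minikube's node_ready helper)
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-112466", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("node did not become Ready before the deadline")
}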
	I0414 11:48:01.670918  541813 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 11:48:01.677056  541813 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vpsxg" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:01.681941  541813 pod_ready.go:93] pod "coredns-6d4b75cb6d-vpsxg" in "kube-system" namespace has status "Ready":"True"
	I0414 11:48:01.681966  541813 pod_ready.go:82] duration metric: took 4.878159ms for pod "coredns-6d4b75cb6d-vpsxg" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:01.681976  541813 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.687466  541813 pod_ready.go:93] pod "etcd-test-preload-112466" in "kube-system" namespace has status "Ready":"True"
	I0414 11:48:03.687492  541813 pod_ready.go:82] duration metric: took 2.005504559s for pod "etcd-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.687501  541813 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.692104  541813 pod_ready.go:93] pod "kube-apiserver-test-preload-112466" in "kube-system" namespace has status "Ready":"True"
	I0414 11:48:03.692134  541813 pod_ready.go:82] duration metric: took 4.625188ms for pod "kube-apiserver-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.692149  541813 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.696709  541813 pod_ready.go:93] pod "kube-controller-manager-test-preload-112466" in "kube-system" namespace has status "Ready":"True"
	I0414 11:48:03.696734  541813 pod_ready.go:82] duration metric: took 4.576196ms for pod "kube-controller-manager-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.696747  541813 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8tfqc" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.700894  541813 pod_ready.go:93] pod "kube-proxy-8tfqc" in "kube-system" namespace has status "Ready":"True"
	I0414 11:48:03.700918  541813 pod_ready.go:82] duration metric: took 4.163847ms for pod "kube-proxy-8tfqc" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:03.700929  541813 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:04.070782  541813 pod_ready.go:93] pod "kube-scheduler-test-preload-112466" in "kube-system" namespace has status "Ready":"True"
	I0414 11:48:04.070838  541813 pod_ready.go:82] duration metric: took 369.900155ms for pod "kube-scheduler-test-preload-112466" in "kube-system" namespace to be "Ready" ...
	I0414 11:48:04.070854  541813 pod_ready.go:39] duration metric: took 2.399921182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 11:48:04.070877  541813 api_server.go:52] waiting for apiserver process to appear ...
	I0414 11:48:04.070941  541813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:48:04.085405  541813 api_server.go:72] duration metric: took 10.101385834s to wait for apiserver process to appear ...
	I0414 11:48:04.085434  541813 api_server.go:88] waiting for apiserver healthz status ...
	I0414 11:48:04.085486  541813 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0414 11:48:04.092018  541813 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0414 11:48:04.092930  541813 api_server.go:141] control plane version: v1.24.4
	I0414 11:48:04.092951  541813 api_server.go:131] duration metric: took 7.511433ms to wait for apiserver health ...
	I0414 11:48:04.092960  541813 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 11:48:04.272425  541813 system_pods.go:59] 7 kube-system pods found
	I0414 11:48:04.272457  541813 system_pods.go:61] "coredns-6d4b75cb6d-vpsxg" [5f44b727-65c5-49e6-8fcd-2e7744162164] Running
	I0414 11:48:04.272461  541813 system_pods.go:61] "etcd-test-preload-112466" [a30166e6-26ef-451b-8534-fbb955d410cf] Running
	I0414 11:48:04.272465  541813 system_pods.go:61] "kube-apiserver-test-preload-112466" [4afd77b3-5cbf-4075-b56e-0275c00ed82d] Running
	I0414 11:48:04.272468  541813 system_pods.go:61] "kube-controller-manager-test-preload-112466" [691d98bf-bb9c-4328-a180-98962176ab4f] Running
	I0414 11:48:04.272470  541813 system_pods.go:61] "kube-proxy-8tfqc" [fbaa502e-bc22-43b9-9a89-30c5969681c4] Running
	I0414 11:48:04.272473  541813 system_pods.go:61] "kube-scheduler-test-preload-112466" [e3eb23ec-43c0-4735-84a0-d4c63a477b5e] Running
	I0414 11:48:04.272476  541813 system_pods.go:61] "storage-provisioner" [fd871b1c-6af0-4ce4-b904-4edad25ab945] Running
	I0414 11:48:04.272482  541813 system_pods.go:74] duration metric: took 179.516605ms to wait for pod list to return data ...
	I0414 11:48:04.272490  541813 default_sa.go:34] waiting for default service account to be created ...
	I0414 11:48:04.470407  541813 default_sa.go:45] found service account: "default"
	I0414 11:48:04.470435  541813 default_sa.go:55] duration metric: took 197.93852ms for default service account to be created ...
	I0414 11:48:04.470445  541813 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 11:48:04.671858  541813 system_pods.go:86] 7 kube-system pods found
	I0414 11:48:04.671889  541813 system_pods.go:89] "coredns-6d4b75cb6d-vpsxg" [5f44b727-65c5-49e6-8fcd-2e7744162164] Running
	I0414 11:48:04.671895  541813 system_pods.go:89] "etcd-test-preload-112466" [a30166e6-26ef-451b-8534-fbb955d410cf] Running
	I0414 11:48:04.671898  541813 system_pods.go:89] "kube-apiserver-test-preload-112466" [4afd77b3-5cbf-4075-b56e-0275c00ed82d] Running
	I0414 11:48:04.671902  541813 system_pods.go:89] "kube-controller-manager-test-preload-112466" [691d98bf-bb9c-4328-a180-98962176ab4f] Running
	I0414 11:48:04.671905  541813 system_pods.go:89] "kube-proxy-8tfqc" [fbaa502e-bc22-43b9-9a89-30c5969681c4] Running
	I0414 11:48:04.671908  541813 system_pods.go:89] "kube-scheduler-test-preload-112466" [e3eb23ec-43c0-4735-84a0-d4c63a477b5e] Running
	I0414 11:48:04.671912  541813 system_pods.go:89] "storage-provisioner" [fd871b1c-6af0-4ce4-b904-4edad25ab945] Running
	I0414 11:48:04.671918  541813 system_pods.go:126] duration metric: took 201.468234ms to wait for k8s-apps to be running ...
	I0414 11:48:04.671926  541813 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 11:48:04.671972  541813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:48:04.686000  541813 system_svc.go:56] duration metric: took 14.064346ms WaitForService to wait for kubelet
	I0414 11:48:04.686028  541813 kubeadm.go:582] duration metric: took 10.702026932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 11:48:04.686048  541813 node_conditions.go:102] verifying NodePressure condition ...
	I0414 11:48:04.870638  541813 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 11:48:04.870664  541813 node_conditions.go:123] node cpu capacity is 2
	I0414 11:48:04.870677  541813 node_conditions.go:105] duration metric: took 184.624981ms to run NodePressure ...
	I0414 11:48:04.870691  541813 start.go:241] waiting for startup goroutines ...
	I0414 11:48:04.870698  541813 start.go:246] waiting for cluster config update ...
	I0414 11:48:04.870709  541813 start.go:255] writing updated cluster config ...
	I0414 11:48:04.870969  541813 ssh_runner.go:195] Run: rm -f paused
	I0414 11:48:04.923724  541813 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0414 11:48:04.925567  541813 out.go:201] 
	W0414 11:48:04.926726  541813 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0414 11:48:04.927725  541813 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0414 11:48:04.928747  541813 out.go:177] * Done! kubectl is now configured to use "test-preload-112466" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.830161442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=523b8673-5c48-4eec-bfb3-09b9c7dbb618 name=/runtime.v1.RuntimeService/Version
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.831292457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2199fcbb-906a-4e83-a98d-c3ef8f9bdf82 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.831959964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744631285831935239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2199fcbb-906a-4e83-a98d-c3ef8f9bdf82 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.832555814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25af2c93-be08-4818-808a-b9204b087eca name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.832601830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25af2c93-be08-4818-808a-b9204b087eca name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.832772003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eece1b744c7e221e6fda0a13a65b1c771be569f031099b3fa3f38b6b84cc43b,PodSandboxId:8971098f7978dbdf50814384f216f95f45de904676a765742daeb151af4d1138,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744631279779176369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vpsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f44b727-65c5-49e6-8fcd-2e7744162164,},Annotations:map[string]string{io.kubernetes.container.hash: 54093302,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4645d03c50d719fe690873e41d28af9715ea1cc3260d4c376f64e74f5afdbcac,PodSandboxId:c401bfb55a6ab49ce3bf82a0ced183e3ffa9063ab1bdbd73cd515a1f1abbd1f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744631272688206730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fd871b1c-6af0-4ce4-b904-4edad25ab945,},Annotations:map[string]string{io.kubernetes.container.hash: 8412a78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7da3456e28069c286c3e3dcf467a009872e8146467de1c083c7265236bb67f64,PodSandboxId:0b98981e1fdf9e6764e647593d2ab954255c0dea19eded562c830ccfe31c68d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744631272347267393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8tfqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
aa502e-bc22-43b9-9a89-30c5969681c4,},Annotations:map[string]string{io.kubernetes.container.hash: f81c1425,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07eb2f2bd5c23fce1a350774433b6b58ce309053a13befc9a69157dd534547f,PodSandboxId:128a57107da41fa26af37ece85e112574a5e68c38d1a5535e6bf70fbff7f86b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744631267439747739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc3890bc9609409b4fd2895be968b85,},Anno
tations:map[string]string{io.kubernetes.container.hash: b9eb0196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e051a077fed23021524213dcb2ac17be7f22b534aed5c2938fd988fd3148ec,PodSandboxId:ddffdcf4256a5d54ab4e8bde124b2a4afaa5f780afd5ce658fd72d26ca5640f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744631267417212700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e07aa1de26dbf69a4007e
20411db7d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fd2ef25f83240f987215891bea1cdbdd89913f7a4a707f7d3ad9f90a62b337,PodSandboxId:cbc7b0323a6b8cd945b571e2d5e7af4f6a76eebe5b30390eb4c1a7b1798c241a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744631267372953736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e91e724c383c218d4f90fc096b97cc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f5e394cbd7d9cbc64640ab63909aceb1f82e9f02d76b8ae7946f54930dda96,PodSandboxId:a079fc05bfd7fb1767a72dfb6ad06628f80015e7acea2c59205e41e8911aa882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744631267313206042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a2b0d394f1ab5e035ba88ae191e4019,},Annotation
s:map[string]string{io.kubernetes.container.hash: aa0581ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25af2c93-be08-4818-808a-b9204b087eca name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.843508135Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=831d5a87-d094-440c-ae20-60188b3b7555 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.843683201Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8971098f7978dbdf50814384f216f95f45de904676a765742daeb151af4d1138,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-vpsxg,Uid:5f44b727-65c5-49e6-8fcd-2e7744162164,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744631279568255385,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-vpsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f44b727-65c5-49e6-8fcd-2e7744162164,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T11:47:51.652245292Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c401bfb55a6ab49ce3bf82a0ced183e3ffa9063ab1bdbd73cd515a1f1abbd1f3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fd871b1c-6af0-4ce4-b904-4edad25ab945,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744631272559845843,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd871b1c-6af0-4ce4-b904-4edad25ab945,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-04-14T11:47:51.652244249Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b98981e1fdf9e6764e647593d2ab954255c0dea19eded562c830ccfe31c68d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-8tfqc,Uid:fbaa502e-bc22-43b9-9a89-30c5969681c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744631272263178155,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8tfqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbaa502e-bc22-43b9-9a89-30c5969681c4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T11:47:51.652241900Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ddffdcf4256a5d54ab4e8bde124b2a4afaa5f780afd5ce658fd72d26ca5640f4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-112466,Ui
d:33e07aa1de26dbf69a4007e20411db7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744631267199878934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e07aa1de26dbf69a4007e20411db7d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 33e07aa1de26dbf69a4007e20411db7d,kubernetes.io/config.seen: 2025-04-14T11:47:46.651334543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:128a57107da41fa26af37ece85e112574a5e68c38d1a5535e6bf70fbff7f86b9,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-112466,Uid:ebc3890bc9609409b4fd2895be968b85,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744631267199427458,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-112466,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ebc3890bc9609409b4fd2895be968b85,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.140:2379,kubernetes.io/config.hash: ebc3890bc9609409b4fd2895be968b85,kubernetes.io/config.seen: 2025-04-14T11:47:46.704631573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cbc7b0323a6b8cd945b571e2d5e7af4f6a76eebe5b30390eb4c1a7b1798c241a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-112466,Uid:28e91e724c383c218d4f90fc096b97cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744631267196985010,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e91e724c383c218d4f90fc096b97cc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 28e91e724c383c218d4f90fc096b97cc,kubernetes.io/config.seen: 2025-04-14T11
:47:46.651335570Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a079fc05bfd7fb1767a72dfb6ad06628f80015e7acea2c59205e41e8911aa882,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-112466,Uid:2a2b0d394f1ab5e035ba88ae191e4019,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744631267173421633,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a2b0d394f1ab5e035ba88ae191e4019,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.140:8443,kubernetes.io/config.hash: 2a2b0d394f1ab5e035ba88ae191e4019,kubernetes.io/config.seen: 2025-04-14T11:47:46.651305231Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=831d5a87-d094-440c-ae20-60188b3b7555 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.844269041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=faaf30ee-4b95-40e4-a72e-6704e6721c29 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.844319396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=faaf30ee-4b95-40e4-a72e-6704e6721c29 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.844635354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eece1b744c7e221e6fda0a13a65b1c771be569f031099b3fa3f38b6b84cc43b,PodSandboxId:8971098f7978dbdf50814384f216f95f45de904676a765742daeb151af4d1138,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744631279779176369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vpsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f44b727-65c5-49e6-8fcd-2e7744162164,},Annotations:map[string]string{io.kubernetes.container.hash: 54093302,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4645d03c50d719fe690873e41d28af9715ea1cc3260d4c376f64e74f5afdbcac,PodSandboxId:c401bfb55a6ab49ce3bf82a0ced183e3ffa9063ab1bdbd73cd515a1f1abbd1f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744631272688206730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fd871b1c-6af0-4ce4-b904-4edad25ab945,},Annotations:map[string]string{io.kubernetes.container.hash: 8412a78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7da3456e28069c286c3e3dcf467a009872e8146467de1c083c7265236bb67f64,PodSandboxId:0b98981e1fdf9e6764e647593d2ab954255c0dea19eded562c830ccfe31c68d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744631272347267393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8tfqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
aa502e-bc22-43b9-9a89-30c5969681c4,},Annotations:map[string]string{io.kubernetes.container.hash: f81c1425,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07eb2f2bd5c23fce1a350774433b6b58ce309053a13befc9a69157dd534547f,PodSandboxId:128a57107da41fa26af37ece85e112574a5e68c38d1a5535e6bf70fbff7f86b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744631267439747739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc3890bc9609409b4fd2895be968b85,},Anno
tations:map[string]string{io.kubernetes.container.hash: b9eb0196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e051a077fed23021524213dcb2ac17be7f22b534aed5c2938fd988fd3148ec,PodSandboxId:ddffdcf4256a5d54ab4e8bde124b2a4afaa5f780afd5ce658fd72d26ca5640f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744631267417212700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e07aa1de26dbf69a4007e
20411db7d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fd2ef25f83240f987215891bea1cdbdd89913f7a4a707f7d3ad9f90a62b337,PodSandboxId:cbc7b0323a6b8cd945b571e2d5e7af4f6a76eebe5b30390eb4c1a7b1798c241a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744631267372953736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e91e724c383c218d4f90fc096b97cc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f5e394cbd7d9cbc64640ab63909aceb1f82e9f02d76b8ae7946f54930dda96,PodSandboxId:a079fc05bfd7fb1767a72dfb6ad06628f80015e7acea2c59205e41e8911aa882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744631267313206042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a2b0d394f1ab5e035ba88ae191e4019,},Annotation
s:map[string]string{io.kubernetes.container.hash: aa0581ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=faaf30ee-4b95-40e4-a72e-6704e6721c29 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.868081532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f4eca21-6de1-422d-92cf-ae8ef161669a name=/runtime.v1.RuntimeService/Version
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.868163900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f4eca21-6de1-422d-92cf-ae8ef161669a name=/runtime.v1.RuntimeService/Version
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.869105163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47189edc-7025-4ac3-b6d3-a193d3065105 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.869599358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744631285869574651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47189edc-7025-4ac3-b6d3-a193d3065105 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.870111052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a3c84aa-0d33-4e70-974e-22a80f2cc71a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.870161902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a3c84aa-0d33-4e70-974e-22a80f2cc71a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.870325280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eece1b744c7e221e6fda0a13a65b1c771be569f031099b3fa3f38b6b84cc43b,PodSandboxId:8971098f7978dbdf50814384f216f95f45de904676a765742daeb151af4d1138,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744631279779176369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vpsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f44b727-65c5-49e6-8fcd-2e7744162164,},Annotations:map[string]string{io.kubernetes.container.hash: 54093302,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4645d03c50d719fe690873e41d28af9715ea1cc3260d4c376f64e74f5afdbcac,PodSandboxId:c401bfb55a6ab49ce3bf82a0ced183e3ffa9063ab1bdbd73cd515a1f1abbd1f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744631272688206730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fd871b1c-6af0-4ce4-b904-4edad25ab945,},Annotations:map[string]string{io.kubernetes.container.hash: 8412a78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7da3456e28069c286c3e3dcf467a009872e8146467de1c083c7265236bb67f64,PodSandboxId:0b98981e1fdf9e6764e647593d2ab954255c0dea19eded562c830ccfe31c68d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744631272347267393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8tfqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
aa502e-bc22-43b9-9a89-30c5969681c4,},Annotations:map[string]string{io.kubernetes.container.hash: f81c1425,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07eb2f2bd5c23fce1a350774433b6b58ce309053a13befc9a69157dd534547f,PodSandboxId:128a57107da41fa26af37ece85e112574a5e68c38d1a5535e6bf70fbff7f86b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744631267439747739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc3890bc9609409b4fd2895be968b85,},Anno
tations:map[string]string{io.kubernetes.container.hash: b9eb0196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e051a077fed23021524213dcb2ac17be7f22b534aed5c2938fd988fd3148ec,PodSandboxId:ddffdcf4256a5d54ab4e8bde124b2a4afaa5f780afd5ce658fd72d26ca5640f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744631267417212700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e07aa1de26dbf69a4007e
20411db7d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fd2ef25f83240f987215891bea1cdbdd89913f7a4a707f7d3ad9f90a62b337,PodSandboxId:cbc7b0323a6b8cd945b571e2d5e7af4f6a76eebe5b30390eb4c1a7b1798c241a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744631267372953736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e91e724c383c218d4f90fc096b97cc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f5e394cbd7d9cbc64640ab63909aceb1f82e9f02d76b8ae7946f54930dda96,PodSandboxId:a079fc05bfd7fb1767a72dfb6ad06628f80015e7acea2c59205e41e8911aa882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744631267313206042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a2b0d394f1ab5e035ba88ae191e4019,},Annotation
s:map[string]string{io.kubernetes.container.hash: aa0581ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a3c84aa-0d33-4e70-974e-22a80f2cc71a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.900340521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=808bbccc-be26-4c2e-bb4e-c7edd0f6be23 name=/runtime.v1.RuntimeService/Version
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.900442974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=808bbccc-be26-4c2e-bb4e-c7edd0f6be23 name=/runtime.v1.RuntimeService/Version
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.901333159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f8b8253-1d36-49b8-9789-fdf496300551 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.901823621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744631285901803354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f8b8253-1d36-49b8-9789-fdf496300551 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.902287539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=892a34fa-97c7-4110-b2dd-5be438f63690 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.902384708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=892a34fa-97c7-4110-b2dd-5be438f63690 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 11:48:05 test-preload-112466 crio[675]: time="2025-04-14 11:48:05.902541302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eece1b744c7e221e6fda0a13a65b1c771be569f031099b3fa3f38b6b84cc43b,PodSandboxId:8971098f7978dbdf50814384f216f95f45de904676a765742daeb151af4d1138,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744631279779176369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vpsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f44b727-65c5-49e6-8fcd-2e7744162164,},Annotations:map[string]string{io.kubernetes.container.hash: 54093302,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4645d03c50d719fe690873e41d28af9715ea1cc3260d4c376f64e74f5afdbcac,PodSandboxId:c401bfb55a6ab49ce3bf82a0ced183e3ffa9063ab1bdbd73cd515a1f1abbd1f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744631272688206730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fd871b1c-6af0-4ce4-b904-4edad25ab945,},Annotations:map[string]string{io.kubernetes.container.hash: 8412a78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7da3456e28069c286c3e3dcf467a009872e8146467de1c083c7265236bb67f64,PodSandboxId:0b98981e1fdf9e6764e647593d2ab954255c0dea19eded562c830ccfe31c68d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744631272347267393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8tfqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
aa502e-bc22-43b9-9a89-30c5969681c4,},Annotations:map[string]string{io.kubernetes.container.hash: f81c1425,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07eb2f2bd5c23fce1a350774433b6b58ce309053a13befc9a69157dd534547f,PodSandboxId:128a57107da41fa26af37ece85e112574a5e68c38d1a5535e6bf70fbff7f86b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744631267439747739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc3890bc9609409b4fd2895be968b85,},Anno
tations:map[string]string{io.kubernetes.container.hash: b9eb0196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e051a077fed23021524213dcb2ac17be7f22b534aed5c2938fd988fd3148ec,PodSandboxId:ddffdcf4256a5d54ab4e8bde124b2a4afaa5f780afd5ce658fd72d26ca5640f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744631267417212700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e07aa1de26dbf69a4007e
20411db7d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fd2ef25f83240f987215891bea1cdbdd89913f7a4a707f7d3ad9f90a62b337,PodSandboxId:cbc7b0323a6b8cd945b571e2d5e7af4f6a76eebe5b30390eb4c1a7b1798c241a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744631267372953736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e91e724c383c218d4f90fc096b97cc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f5e394cbd7d9cbc64640ab63909aceb1f82e9f02d76b8ae7946f54930dda96,PodSandboxId:a079fc05bfd7fb1767a72dfb6ad06628f80015e7acea2c59205e41e8911aa882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744631267313206042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-112466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a2b0d394f1ab5e035ba88ae191e4019,},Annotation
s:map[string]string{io.kubernetes.container.hash: aa0581ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=892a34fa-97c7-4110-b2dd-5be438f63690 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5eece1b744c7e       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   8971098f7978d       coredns-6d4b75cb6d-vpsxg
	4645d03c50d71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   c401bfb55a6ab       storage-provisioner
	7da3456e28069       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   0b98981e1fdf9       kube-proxy-8tfqc
	d07eb2f2bd5c2       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   128a57107da41       etcd-test-preload-112466
	28e051a077fed       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   ddffdcf4256a5       kube-controller-manager-test-preload-112466
	f1fd2ef25f832       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   cbc7b0323a6b8       kube-scheduler-test-preload-112466
	a3f5e394cbd7d       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   a079fc05bfd7f       kube-apiserver-test-preload-112466
	
	
	==> coredns [5eece1b744c7e221e6fda0a13a65b1c771be569f031099b3fa3f38b6b84cc43b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:48646 - 10565 "HINFO IN 2336520315780130034.2393377419730347886. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031334599s
	
	
	==> describe nodes <==
	Name:               test-preload-112466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-112466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
	                    minikube.k8s.io/name=test-preload-112466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T11_45_58_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 11:45:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-112466
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 11:48:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 11:48:01 +0000   Mon, 14 Apr 2025 11:45:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 11:48:01 +0000   Mon, 14 Apr 2025 11:45:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 11:48:01 +0000   Mon, 14 Apr 2025 11:45:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 11:48:01 +0000   Mon, 14 Apr 2025 11:48:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    test-preload-112466
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2961a858c4fa44679030d2f2ff3cd0d9
	  System UUID:                2961a858-c4fa-4467-9030-d2f2ff3cd0d9
	  Boot ID:                    4845152d-8130-4080-ba48-a8587c8a6601
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vpsxg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     115s
	  kube-system                 etcd-test-preload-112466                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m9s
	  kube-system                 kube-apiserver-test-preload-112466             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-test-preload-112466    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-8tfqc                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-test-preload-112466             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m16s (x5 over 2m16s)  kubelet          Node test-preload-112466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x4 over 2m16s)  kubelet          Node test-preload-112466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x4 over 2m16s)  kubelet          Node test-preload-112466 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node test-preload-112466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node test-preload-112466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node test-preload-112466 status is now: NodeHasSufficientPID
	  Normal  NodeReady                118s                   kubelet          Node test-preload-112466 status is now: NodeReady
	  Normal  RegisteredNode           116s                   node-controller  Node test-preload-112466 event: Registered Node test-preload-112466 in Controller
	  Normal  Starting                 20s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)      kubelet          Node test-preload-112466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)      kubelet          Node test-preload-112466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)      kubelet          Node test-preload-112466 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-112466 event: Registered Node test-preload-112466 in Controller
	
	
	==> dmesg <==
	[Apr14 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038476] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.838938] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.068846] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529589] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.526311] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.064382] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059364] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.182988] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114632] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.258429] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[ +13.112777] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.063428] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.517241] systemd-fstab-generator[1122]: Ignoring "noauto" option for root device
	[  +4.420863] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.170791] systemd-fstab-generator[1784]: Ignoring "noauto" option for root device
	[  +5.547263] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d07eb2f2bd5c23fce1a350774433b6b58ce309053a13befc9a69157dd534547f] <==
	{"level":"info","ts":"2025-04-14T11:47:47.782Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d94bec2e0ded43ac","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-14T11:47:47.800Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-14T11:47:47.808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac switched to configuration voters=(15657868212029965228)"}
	{"level":"info","ts":"2025-04-14T11:47:47.808Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","added-peer-id":"d94bec2e0ded43ac","added-peer-peer-urls":["https://192.168.39.140:2380"]}
	{"level":"info","ts":"2025-04-14T11:47:47.808Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T11:47:47.808Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T11:47:47.811Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T11:47:47.814Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d94bec2e0ded43ac","initial-advertise-peer-urls":["https://192.168.39.140:2380"],"listen-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T11:47:47.814Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T11:47:47.814Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2025-04-14T11:47:47.817Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2025-04-14T11:47:49.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T11:47:49.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T11:47:49.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2025-04-14T11:47:49.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T11:47:49.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2025-04-14T11:47:49.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 3"}
	{"level":"info","ts":"2025-04-14T11:47:49.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2025-04-14T11:47:49.139Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:test-preload-112466 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T11:47:49.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:47:49.140Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:47:49.141Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T11:47:49.142Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.140:2379"}
	{"level":"info","ts":"2025-04-14T11:47:49.142Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T11:47:49.142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:48:06 up 0 min,  0 users,  load average: 0.94, 0.25, 0.08
	Linux test-preload-112466 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a3f5e394cbd7d9cbc64640ab63909aceb1f82e9f02d76b8ae7946f54930dda96] <==
	I0414 11:47:51.406682       1 controller.go:85] Starting OpenAPI controller
	I0414 11:47:51.406753       1 controller.go:85] Starting OpenAPI V3 controller
	I0414 11:47:51.406823       1 naming_controller.go:291] Starting NamingConditionController
	I0414 11:47:51.406872       1 establishing_controller.go:76] Starting EstablishingController
	I0414 11:47:51.406906       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0414 11:47:51.406939       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0414 11:47:51.406969       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0414 11:47:51.456691       1 cache.go:39] Caches are synced for autoregister controller
	I0414 11:47:51.472505       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0414 11:47:51.475767       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0414 11:47:51.485260       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 11:47:51.498859       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0414 11:47:51.499509       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0414 11:47:51.501047       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0414 11:47:51.566665       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 11:47:52.063672       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0414 11:47:52.368511       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 11:47:52.722094       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0414 11:47:52.943727       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0414 11:47:52.955869       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0414 11:47:52.994132       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0414 11:47:53.008563       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 11:47:53.014636       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 11:48:03.893792       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 11:48:03.899634       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [28e051a077fed23021524213dcb2ac17be7f22b534aed5c2938fd988fd3148ec] <==
	I0414 11:48:03.871823       1 shared_informer.go:262] Caches are synced for persistent volume
	I0414 11:48:03.876984       1 shared_informer.go:262] Caches are synced for crt configmap
	I0414 11:48:03.878283       1 shared_informer.go:262] Caches are synced for ephemeral
	I0414 11:48:03.881505       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0414 11:48:03.881544       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0414 11:48:03.882581       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0414 11:48:03.883914       1 shared_informer.go:262] Caches are synced for service account
	I0414 11:48:03.887255       1 shared_informer.go:262] Caches are synced for PV protection
	I0414 11:48:03.887390       1 shared_informer.go:262] Caches are synced for daemon sets
	I0414 11:48:03.889790       1 shared_informer.go:262] Caches are synced for endpoint
	I0414 11:48:03.895093       1 shared_informer.go:262] Caches are synced for HPA
	I0414 11:48:03.896781       1 shared_informer.go:262] Caches are synced for TTL
	I0414 11:48:03.898697       1 shared_informer.go:262] Caches are synced for expand
	I0414 11:48:03.908983       1 shared_informer.go:262] Caches are synced for namespace
	I0414 11:48:03.917065       1 shared_informer.go:262] Caches are synced for stateful set
	I0414 11:48:03.936484       1 shared_informer.go:262] Caches are synced for GC
	I0414 11:48:03.939093       1 shared_informer.go:262] Caches are synced for job
	I0414 11:48:03.941420       1 shared_informer.go:262] Caches are synced for deployment
	I0414 11:48:03.942645       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0414 11:48:03.984719       1 shared_informer.go:262] Caches are synced for attach detach
	I0414 11:48:04.068816       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 11:48:04.111467       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 11:48:04.514968       1 shared_informer.go:262] Caches are synced for garbage collector
	I0414 11:48:04.515001       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0414 11:48:04.553589       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [7da3456e28069c286c3e3dcf467a009872e8146467de1c083c7265236bb67f64] <==
	I0414 11:47:52.663661       1 node.go:163] Successfully retrieved node IP: 192.168.39.140
	I0414 11:47:52.663741       1 server_others.go:138] "Detected node IP" address="192.168.39.140"
	I0414 11:47:52.663779       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0414 11:47:52.704276       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0414 11:47:52.704307       1 server_others.go:206] "Using iptables Proxier"
	I0414 11:47:52.704394       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0414 11:47:52.705730       1 server.go:661] "Version info" version="v1.24.4"
	I0414 11:47:52.705754       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:47:52.712391       1 config.go:317] "Starting service config controller"
	I0414 11:47:52.712428       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0414 11:47:52.712477       1 config.go:226] "Starting endpoint slice config controller"
	I0414 11:47:52.712482       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0414 11:47:52.718275       1 config.go:444] "Starting node config controller"
	I0414 11:47:52.718309       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0414 11:47:52.812633       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0414 11:47:52.812706       1 shared_informer.go:262] Caches are synced for service config
	I0414 11:47:52.818702       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f1fd2ef25f83240f987215891bea1cdbdd89913f7a4a707f7d3ad9f90a62b337] <==
	I0414 11:47:48.589062       1 serving.go:348] Generated self-signed cert in-memory
	W0414 11:47:51.418423       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 11:47:51.418769       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 11:47:51.418871       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 11:47:51.418901       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 11:47:51.469565       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0414 11:47:51.469599       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:47:51.479023       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0414 11:47:51.481105       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 11:47:51.481153       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 11:47:51.481214       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0414 11:47:51.581599       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.649159    1129 apiserver.go:52] "Watching apiserver"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.652805    1129 topology_manager.go:200] "Topology Admit Handler"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.653016    1129 topology_manager.go:200] "Topology Admit Handler"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.653122    1129 topology_manager.go:200] "Topology Admit Handler"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: E0414 11:47:51.654686    1129 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vpsxg" podUID=5f44b727-65c5-49e6-8fcd-2e7744162164
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.727820    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fbaa502e-bc22-43b9-9a89-30c5969681c4-kube-proxy\") pod \"kube-proxy-8tfqc\" (UID: \"fbaa502e-bc22-43b9-9a89-30c5969681c4\") " pod="kube-system/kube-proxy-8tfqc"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.727996    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbaa502e-bc22-43b9-9a89-30c5969681c4-lib-modules\") pod \"kube-proxy-8tfqc\" (UID: \"fbaa502e-bc22-43b9-9a89-30c5969681c4\") " pod="kube-system/kube-proxy-8tfqc"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.728023    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbaa502e-bc22-43b9-9a89-30c5969681c4-xtables-lock\") pod \"kube-proxy-8tfqc\" (UID: \"fbaa502e-bc22-43b9-9a89-30c5969681c4\") " pod="kube-system/kube-proxy-8tfqc"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.728106    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cpdj\" (UniqueName: \"kubernetes.io/projected/fbaa502e-bc22-43b9-9a89-30c5969681c4-kube-api-access-5cpdj\") pod \"kube-proxy-8tfqc\" (UID: \"fbaa502e-bc22-43b9-9a89-30c5969681c4\") " pod="kube-system/kube-proxy-8tfqc"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.728201    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume\") pod \"coredns-6d4b75cb6d-vpsxg\" (UID: \"5f44b727-65c5-49e6-8fcd-2e7744162164\") " pod="kube-system/coredns-6d4b75cb6d-vpsxg"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.728288    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvtms\" (UniqueName: \"kubernetes.io/projected/5f44b727-65c5-49e6-8fcd-2e7744162164-kube-api-access-cvtms\") pod \"coredns-6d4b75cb6d-vpsxg\" (UID: \"5f44b727-65c5-49e6-8fcd-2e7744162164\") " pod="kube-system/coredns-6d4b75cb6d-vpsxg"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.728415    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fd871b1c-6af0-4ce4-b904-4edad25ab945-tmp\") pod \"storage-provisioner\" (UID: \"fd871b1c-6af0-4ce4-b904-4edad25ab945\") " pod="kube-system/storage-provisioner"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.728446    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmzn\" (UniqueName: \"kubernetes.io/projected/fd871b1c-6af0-4ce4-b904-4edad25ab945-kube-api-access-mfmzn\") pod \"storage-provisioner\" (UID: \"fd871b1c-6af0-4ce4-b904-4edad25ab945\") " pod="kube-system/storage-provisioner"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: I0414 11:47:51.728531    1129 reconciler.go:159] "Reconciler: start to sync state"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: E0414 11:47:51.728701    1129 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: E0414 11:47:51.832066    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 11:47:51 test-preload-112466 kubelet[1129]: E0414 11:47:51.832292    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume podName:5f44b727-65c5-49e6-8fcd-2e7744162164 nodeName:}" failed. No retries permitted until 2025-04-14 11:47:52.332258953 +0000 UTC m=+5.799956920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume") pod "coredns-6d4b75cb6d-vpsxg" (UID: "5f44b727-65c5-49e6-8fcd-2e7744162164") : object "kube-system"/"coredns" not registered
	Apr 14 11:47:52 test-preload-112466 kubelet[1129]: E0414 11:47:52.336153    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 11:47:52 test-preload-112466 kubelet[1129]: E0414 11:47:52.336206    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume podName:5f44b727-65c5-49e6-8fcd-2e7744162164 nodeName:}" failed. No retries permitted until 2025-04-14 11:47:53.336192489 +0000 UTC m=+6.803890435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume") pod "coredns-6d4b75cb6d-vpsxg" (UID: "5f44b727-65c5-49e6-8fcd-2e7744162164") : object "kube-system"/"coredns" not registered
	Apr 14 11:47:53 test-preload-112466 kubelet[1129]: E0414 11:47:53.345453    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 11:47:53 test-preload-112466 kubelet[1129]: E0414 11:47:53.345549    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume podName:5f44b727-65c5-49e6-8fcd-2e7744162164 nodeName:}" failed. No retries permitted until 2025-04-14 11:47:55.345531804 +0000 UTC m=+8.813229752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume") pod "coredns-6d4b75cb6d-vpsxg" (UID: "5f44b727-65c5-49e6-8fcd-2e7744162164") : object "kube-system"/"coredns" not registered
	Apr 14 11:47:53 test-preload-112466 kubelet[1129]: E0414 11:47:53.762130    1129 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vpsxg" podUID=5f44b727-65c5-49e6-8fcd-2e7744162164
	Apr 14 11:47:55 test-preload-112466 kubelet[1129]: E0414 11:47:55.362237    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 11:47:55 test-preload-112466 kubelet[1129]: E0414 11:47:55.362407    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume podName:5f44b727-65c5-49e6-8fcd-2e7744162164 nodeName:}" failed. No retries permitted until 2025-04-14 11:47:59.362384725 +0000 UTC m=+12.830082691 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5f44b727-65c5-49e6-8fcd-2e7744162164-config-volume") pod "coredns-6d4b75cb6d-vpsxg" (UID: "5f44b727-65c5-49e6-8fcd-2e7744162164") : object "kube-system"/"coredns" not registered
	Apr 14 11:47:55 test-preload-112466 kubelet[1129]: E0414 11:47:55.762286    1129 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vpsxg" podUID=5f44b727-65c5-49e6-8fcd-2e7744162164
	
	
	==> storage-provisioner [4645d03c50d719fe690873e41d28af9715ea1cc3260d4c376f64e74f5afdbcac] <==
	I0414 11:47:52.833939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-112466 -n test-preload-112466
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-112466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-112466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-112466
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-112466: (1.191901073s)
--- FAIL: TestPreload (199.25s)

                                                
                                    
TestKubernetesUpgrade (411.39s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m35.380086529s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-943444] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-943444" primary control-plane node in "kubernetes-upgrade-943444" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:50:11.802048  545672 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:50:11.802149  545672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:50:11.802157  545672 out.go:358] Setting ErrFile to fd 2...
	I0414 11:50:11.802160  545672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:50:11.802348  545672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:50:11.802964  545672 out.go:352] Setting JSON to false
	I0414 11:50:11.803909  545672 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":19963,"bootTime":1744611449,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:50:11.804027  545672 start.go:139] virtualization: kvm guest
	I0414 11:50:11.805963  545672 out.go:177] * [kubernetes-upgrade-943444] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:50:11.807322  545672 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:50:11.807343  545672 notify.go:220] Checking for updates...
	I0414 11:50:11.809590  545672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:50:11.810710  545672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:50:11.811806  545672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:50:11.813167  545672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:50:11.814455  545672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:50:11.816080  545672 config.go:182] Loaded profile config "NoKubernetes-223451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:50:11.816176  545672 config.go:182] Loaded profile config "force-systemd-env-233929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:50:11.816260  545672 config.go:182] Loaded profile config "offline-crio-209305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:50:11.816342  545672 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:50:11.852289  545672 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 11:50:11.853411  545672 start.go:297] selected driver: kvm2
	I0414 11:50:11.853428  545672 start.go:901] validating driver "kvm2" against <nil>
	I0414 11:50:11.853444  545672 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:50:11.854472  545672 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:50:11.854561  545672 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 11:50:11.870720  545672 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 11:50:11.870777  545672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 11:50:11.871035  545672 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 11:50:11.871084  545672 cni.go:84] Creating CNI manager for ""
	I0414 11:50:11.871148  545672 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:50:11.871160  545672 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 11:50:11.871222  545672 start.go:340] cluster config:
	{Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:50:11.871396  545672 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:50:11.873069  545672 out.go:177] * Starting "kubernetes-upgrade-943444" primary control-plane node in "kubernetes-upgrade-943444" cluster
	I0414 11:50:11.874360  545672 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 11:50:11.874409  545672 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 11:50:11.874421  545672 cache.go:56] Caching tarball of preloaded images
	I0414 11:50:11.874509  545672 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 11:50:11.874520  545672 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 11:50:11.874627  545672 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/config.json ...
	I0414 11:50:11.874652  545672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/config.json: {Name:mk5c21c0f9faff1323233a1c5f2b8ccefa76d690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:50:11.874816  545672 start.go:360] acquireMachinesLock for kubernetes-upgrade-943444: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 11:51:17.464430  545672 start.go:364] duration metric: took 1m5.589556802s to acquireMachinesLock for "kubernetes-upgrade-943444"
	I0414 11:51:17.464520  545672 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 11:51:17.464666  545672 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 11:51:17.466837  545672 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 11:51:17.467180  545672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:51:17.467259  545672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:51:17.484594  545672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I0414 11:51:17.485137  545672 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:51:17.485711  545672 main.go:141] libmachine: Using API Version  1
	I0414 11:51:17.485736  545672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:51:17.486079  545672 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:51:17.486314  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetMachineName
	I0414 11:51:17.486514  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:17.486676  545672 start.go:159] libmachine.API.Create for "kubernetes-upgrade-943444" (driver="kvm2")
	I0414 11:51:17.486718  545672 client.go:168] LocalClient.Create starting
	I0414 11:51:17.486767  545672 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem
	I0414 11:51:17.486802  545672 main.go:141] libmachine: Decoding PEM data...
	I0414 11:51:17.486825  545672 main.go:141] libmachine: Parsing certificate...
	I0414 11:51:17.486891  545672 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem
	I0414 11:51:17.486910  545672 main.go:141] libmachine: Decoding PEM data...
	I0414 11:51:17.486925  545672 main.go:141] libmachine: Parsing certificate...
	I0414 11:51:17.486953  545672 main.go:141] libmachine: Running pre-create checks...
	I0414 11:51:17.486971  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .PreCreateCheck
	I0414 11:51:17.487398  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetConfigRaw
	I0414 11:51:17.487890  545672 main.go:141] libmachine: Creating machine...
	I0414 11:51:17.487909  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .Create
	I0414 11:51:17.488163  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) creating KVM machine...
	I0414 11:51:17.488200  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) creating network...
	I0414 11:51:17.489647  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found existing default KVM network
	I0414 11:51:17.490747  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:17.490586  546439 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000113180}
	I0414 11:51:17.490772  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | created network xml: 
	I0414 11:51:17.490788  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | <network>
	I0414 11:51:17.490801  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |   <name>mk-kubernetes-upgrade-943444</name>
	I0414 11:51:17.490814  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |   <dns enable='no'/>
	I0414 11:51:17.490825  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |   
	I0414 11:51:17.490851  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 11:51:17.490861  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |     <dhcp>
	I0414 11:51:17.490871  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 11:51:17.490882  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |     </dhcp>
	I0414 11:51:17.490894  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |   </ip>
	I0414 11:51:17.490910  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG |   
	I0414 11:51:17.490921  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | </network>
	I0414 11:51:17.490928  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | 
	I0414 11:51:17.496756  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | trying to create private KVM network mk-kubernetes-upgrade-943444 192.168.39.0/24...
	I0414 11:51:17.573176  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | private KVM network mk-kubernetes-upgrade-943444 192.168.39.0/24 created
	I0414 11:51:17.573244  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) setting up store path in /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444 ...
	I0414 11:51:17.573274  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) building disk image from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 11:51:17.573292  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:17.573235  546439 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:51:17.573482  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Downloading /home/jenkins/minikube-integration/20534-503273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 11:51:17.896219  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:17.896083  546439 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa...
	I0414 11:51:18.162214  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:18.162081  546439 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/kubernetes-upgrade-943444.rawdisk...
	I0414 11:51:18.162257  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | Writing magic tar header
	I0414 11:51:18.162276  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | Writing SSH key tar header
	I0414 11:51:18.162290  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:18.162234  546439 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444 ...
	I0414 11:51:18.162402  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444
	I0414 11:51:18.162433  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines
	I0414 11:51:18.162453  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444 (perms=drwx------)
	I0414 11:51:18.162476  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines (perms=drwxr-xr-x)
	I0414 11:51:18.172543  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:51:18.172603  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273
	I0414 11:51:18.172615  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube (perms=drwxr-xr-x)
	I0414 11:51:18.172657  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 11:51:18.172704  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) setting executable bit set on /home/jenkins/minikube-integration/20534-503273 (perms=drwxrwxr-x)
	I0414 11:51:18.172720  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | checking permissions on dir: /home/jenkins
	I0414 11:51:18.172732  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | checking permissions on dir: /home
	I0414 11:51:18.172743  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | skipping /home - not owner
	I0414 11:51:18.172759  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 11:51:18.172776  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 11:51:18.172787  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) creating domain...
	I0414 11:51:18.173949  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) define libvirt domain using xml: 
	I0414 11:51:18.173970  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) <domain type='kvm'>
	I0414 11:51:18.174004  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   <name>kubernetes-upgrade-943444</name>
	I0414 11:51:18.174044  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   <memory unit='MiB'>2200</memory>
	I0414 11:51:18.174059  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   <vcpu>2</vcpu>
	I0414 11:51:18.174072  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   <features>
	I0414 11:51:18.174083  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <acpi/>
	I0414 11:51:18.174092  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <apic/>
	I0414 11:51:18.174101  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <pae/>
	I0414 11:51:18.174110  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     
	I0414 11:51:18.174118  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   </features>
	I0414 11:51:18.174128  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   <cpu mode='host-passthrough'>
	I0414 11:51:18.174139  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   
	I0414 11:51:18.174149  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   </cpu>
	I0414 11:51:18.174156  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   <os>
	I0414 11:51:18.174175  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <type>hvm</type>
	I0414 11:51:18.174186  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <boot dev='cdrom'/>
	I0414 11:51:18.174193  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <boot dev='hd'/>
	I0414 11:51:18.174201  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <bootmenu enable='no'/>
	I0414 11:51:18.174208  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   </os>
	I0414 11:51:18.174222  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   <devices>
	I0414 11:51:18.174234  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <disk type='file' device='cdrom'>
	I0414 11:51:18.174255  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/boot2docker.iso'/>
	I0414 11:51:18.174272  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <target dev='hdc' bus='scsi'/>
	I0414 11:51:18.174296  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <readonly/>
	I0414 11:51:18.174307  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     </disk>
	I0414 11:51:18.174317  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <disk type='file' device='disk'>
	I0414 11:51:18.174331  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 11:51:18.174391  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/kubernetes-upgrade-943444.rawdisk'/>
	I0414 11:51:18.174416  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <target dev='hda' bus='virtio'/>
	I0414 11:51:18.174432  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     </disk>
	I0414 11:51:18.174444  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <interface type='network'>
	I0414 11:51:18.174459  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <source network='mk-kubernetes-upgrade-943444'/>
	I0414 11:51:18.174470  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <model type='virtio'/>
	I0414 11:51:18.174516  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     </interface>
	I0414 11:51:18.174538  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <interface type='network'>
	I0414 11:51:18.174549  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <source network='default'/>
	I0414 11:51:18.174560  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <model type='virtio'/>
	I0414 11:51:18.174570  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     </interface>
	I0414 11:51:18.174580  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <serial type='pty'>
	I0414 11:51:18.174592  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <target port='0'/>
	I0414 11:51:18.174601  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     </serial>
	I0414 11:51:18.174627  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <console type='pty'>
	I0414 11:51:18.174645  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <target type='serial' port='0'/>
	I0414 11:51:18.174658  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     </console>
	I0414 11:51:18.174666  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     <rng model='virtio'>
	I0414 11:51:18.174676  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)       <backend model='random'>/dev/random</backend>
	I0414 11:51:18.174683  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     </rng>
	I0414 11:51:18.174691  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     
	I0414 11:51:18.174698  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)     
	I0414 11:51:18.174706  545672 main.go:141] libmachine: (kubernetes-upgrade-943444)   </devices>
	I0414 11:51:18.174722  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) </domain>
	I0414 11:51:18.174736  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) 
	I0414 11:51:18.257403  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:d5:6b:92 in network default
	I0414 11:51:18.258280  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:18.258319  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) starting domain...
	I0414 11:51:18.258332  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) ensuring networks are active...
	I0414 11:51:18.259447  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Ensuring network default is active
	I0414 11:51:18.259841  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Ensuring network mk-kubernetes-upgrade-943444 is active
	I0414 11:51:18.260613  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) getting domain XML...
	I0414 11:51:18.261567  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) creating domain...
	I0414 11:51:19.977970  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) waiting for IP...
	I0414 11:51:19.981264  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:19.982234  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:19.982257  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:19.982130  546439 retry.go:31] will retry after 300.939761ms: waiting for domain to come up
	I0414 11:51:20.284974  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:20.285648  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:20.285675  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:20.285590  546439 retry.go:31] will retry after 366.606744ms: waiting for domain to come up
	I0414 11:51:20.654015  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:20.654677  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:20.654706  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:20.654660  546439 retry.go:31] will retry after 316.668382ms: waiting for domain to come up
	I0414 11:51:20.973377  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:20.974031  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:20.974065  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:20.973985  546439 retry.go:31] will retry after 499.058032ms: waiting for domain to come up
	I0414 11:51:21.474977  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:21.475544  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:21.475573  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:21.475511  546439 retry.go:31] will retry after 595.513518ms: waiting for domain to come up
	I0414 11:51:22.072721  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:22.073371  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:22.073406  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:22.073270  546439 retry.go:31] will retry after 791.029806ms: waiting for domain to come up
	I0414 11:51:22.870078  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:22.870797  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:22.870826  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:22.870775  546439 retry.go:31] will retry after 838.477125ms: waiting for domain to come up
	I0414 11:51:23.710827  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:23.711409  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:23.711496  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:23.711250  546439 retry.go:31] will retry after 1.241069623s: waiting for domain to come up
	I0414 11:51:24.954881  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:24.955320  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:24.955396  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:24.955321  546439 retry.go:31] will retry after 1.639948236s: waiting for domain to come up
	I0414 11:51:26.597296  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:26.597847  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:26.597873  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:26.597803  546439 retry.go:31] will retry after 1.699443351s: waiting for domain to come up
	I0414 11:51:28.299051  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:28.299586  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:28.299610  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:28.299546  546439 retry.go:31] will retry after 2.749650186s: waiting for domain to come up
	I0414 11:51:31.052478  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:31.053075  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:31.053118  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:31.053035  546439 retry.go:31] will retry after 2.627525528s: waiting for domain to come up
	I0414 11:51:33.682501  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:33.682998  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:33.683028  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:33.682971  546439 retry.go:31] will retry after 3.361428677s: waiting for domain to come up
	I0414 11:51:37.048594  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:37.049065  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find current IP address of domain kubernetes-upgrade-943444 in network mk-kubernetes-upgrade-943444
	I0414 11:51:37.049094  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | I0414 11:51:37.049023  546439 retry.go:31] will retry after 3.444266085s: waiting for domain to come up
	I0414 11:51:40.496941  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.497511  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has current primary IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.497536  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) found domain IP: 192.168.39.2
	I0414 11:51:40.497545  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) reserving static IP address...
	I0414 11:51:40.497959  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-943444", mac: "52:54:00:02:e3:cb", ip: "192.168.39.2"} in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.583266  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) reserved static IP address 192.168.39.2 for domain kubernetes-upgrade-943444
	I0414 11:51:40.583365  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) waiting for SSH...
	I0414 11:51:40.583380  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | Getting to WaitForSSH function...
	I0414 11:51:40.587902  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.588395  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:40.588427  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.588566  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | Using SSH client type: external
	I0414 11:51:40.588590  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa (-rw-------)
	I0414 11:51:40.588651  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 11:51:40.588673  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | About to run SSH command:
	I0414 11:51:40.588688  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | exit 0
	I0414 11:51:40.711570  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | SSH cmd err, output: <nil>: 
	I0414 11:51:40.711857  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) KVM machine creation complete
	I0414 11:51:40.712239  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetConfigRaw
	I0414 11:51:40.712775  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:40.713022  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:40.713233  545672 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 11:51:40.713248  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetState
	I0414 11:51:40.714499  545672 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 11:51:40.714512  545672 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 11:51:40.714517  545672 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 11:51:40.714522  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:40.716690  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.717056  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:40.717088  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.717212  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:40.717400  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:40.717548  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:40.717679  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:40.717841  545672 main.go:141] libmachine: Using SSH client type: native
	I0414 11:51:40.718083  545672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:51:40.718096  545672 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 11:51:40.814631  545672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 11:51:40.814659  545672 main.go:141] libmachine: Detecting the provisioner...
	I0414 11:51:40.814668  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:40.817572  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.817931  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:40.817986  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.818126  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:40.818337  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:40.818517  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:40.818705  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:40.818915  545672 main.go:141] libmachine: Using SSH client type: native
	I0414 11:51:40.819213  545672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:51:40.819227  545672 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 11:51:40.915869  545672 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 11:51:40.915975  545672 main.go:141] libmachine: found compatible host: buildroot
	I0414 11:51:40.915988  545672 main.go:141] libmachine: Provisioning with buildroot...
	I0414 11:51:40.915999  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetMachineName
	I0414 11:51:40.916237  545672 buildroot.go:166] provisioning hostname "kubernetes-upgrade-943444"
	I0414 11:51:40.916270  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetMachineName
	I0414 11:51:40.916499  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:40.919564  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.919893  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:40.919921  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:40.920101  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:40.920298  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:40.920473  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:40.920623  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:40.920805  545672 main.go:141] libmachine: Using SSH client type: native
	I0414 11:51:40.921052  545672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:51:40.921066  545672 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-943444 && echo "kubernetes-upgrade-943444" | sudo tee /etc/hostname
	I0414 11:51:41.033143  545672 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-943444
	
	I0414 11:51:41.033179  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.036240  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.036582  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.036606  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.036844  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:41.037083  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.037240  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.037384  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:41.037521  545672 main.go:141] libmachine: Using SSH client type: native
	I0414 11:51:41.037772  545672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:51:41.037791  545672 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-943444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-943444/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-943444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 11:51:41.143559  545672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 11:51:41.143590  545672 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 11:51:41.143609  545672 buildroot.go:174] setting up certificates
	I0414 11:51:41.143623  545672 provision.go:84] configureAuth start
	I0414 11:51:41.143635  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetMachineName
	I0414 11:51:41.143974  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetIP
	I0414 11:51:41.146615  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.146955  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.146997  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.147137  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.149292  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.149669  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.149699  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.149942  545672 provision.go:143] copyHostCerts
	I0414 11:51:41.150016  545672 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 11:51:41.150039  545672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 11:51:41.150096  545672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 11:51:41.150185  545672 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 11:51:41.150193  545672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 11:51:41.150212  545672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 11:51:41.150295  545672 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 11:51:41.150304  545672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 11:51:41.150321  545672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 11:51:41.150368  545672 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-943444 san=[127.0.0.1 192.168.39.2 kubernetes-upgrade-943444 localhost minikube]
	I0414 11:51:41.198418  545672 provision.go:177] copyRemoteCerts
	I0414 11:51:41.198502  545672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 11:51:41.198531  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.201035  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.201395  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.201419  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.201617  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:41.201772  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.201959  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:41.202112  545672 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:51:41.281318  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 11:51:41.303946  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 11:51:41.328760  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 11:51:41.350348  545672 provision.go:87] duration metric: took 206.70842ms to configureAuth
	I0414 11:51:41.350382  545672 buildroot.go:189] setting minikube options for container-runtime
	I0414 11:51:41.350569  545672 config.go:182] Loaded profile config "kubernetes-upgrade-943444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 11:51:41.350666  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.353425  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.353777  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.353826  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.354005  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:41.354197  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.354372  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.354551  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:41.354786  545672 main.go:141] libmachine: Using SSH client type: native
	I0414 11:51:41.355067  545672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:51:41.355104  545672 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 11:51:41.569247  545672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 11:51:41.569284  545672 main.go:141] libmachine: Checking connection to Docker...
	I0414 11:51:41.569295  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetURL
	I0414 11:51:41.570915  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | using libvirt version 6000000
	I0414 11:51:41.573549  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.573987  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.574018  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.574184  545672 main.go:141] libmachine: Docker is up and running!
	I0414 11:51:41.574200  545672 main.go:141] libmachine: Reticulating splines...
	I0414 11:51:41.574209  545672 client.go:171] duration metric: took 24.087479797s to LocalClient.Create
	I0414 11:51:41.574242  545672 start.go:167] duration metric: took 24.08756788s to libmachine.API.Create "kubernetes-upgrade-943444"
	I0414 11:51:41.574254  545672 start.go:293] postStartSetup for "kubernetes-upgrade-943444" (driver="kvm2")
	I0414 11:51:41.574266  545672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 11:51:41.574292  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:41.574557  545672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 11:51:41.574610  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.576933  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.577298  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.577321  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.577454  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:41.577634  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.577791  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:41.577923  545672 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:51:41.657320  545672 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 11:51:41.661365  545672 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 11:51:41.661391  545672 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 11:51:41.661454  545672 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 11:51:41.661531  545672 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 11:51:41.661620  545672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 11:51:41.672295  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 11:51:41.695561  545672 start.go:296] duration metric: took 121.291629ms for postStartSetup
	I0414 11:51:41.695620  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetConfigRaw
	I0414 11:51:41.696290  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetIP
	I0414 11:51:41.699022  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.699360  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.699390  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.699675  545672 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/config.json ...
	I0414 11:51:41.699858  545672 start.go:128] duration metric: took 24.235179981s to createHost
	I0414 11:51:41.699882  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.702550  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.702937  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.702967  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.703089  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:41.703305  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.703500  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.703654  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:41.703852  545672 main.go:141] libmachine: Using SSH client type: native
	I0414 11:51:41.704090  545672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:51:41.704102  545672 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 11:51:41.799885  545672 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744631501.769815594
	
	I0414 11:51:41.799915  545672 fix.go:216] guest clock: 1744631501.769815594
	I0414 11:51:41.799923  545672 fix.go:229] Guest: 2025-04-14 11:51:41.769815594 +0000 UTC Remote: 2025-04-14 11:51:41.699870597 +0000 UTC m=+89.935753082 (delta=69.944997ms)
	I0414 11:51:41.799971  545672 fix.go:200] guest clock delta is within tolerance: 69.944997ms
	I0414 11:51:41.799984  545672 start.go:83] releasing machines lock for "kubernetes-upgrade-943444", held for 24.335508512s
	I0414 11:51:41.800018  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:41.800329  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetIP
	I0414 11:51:41.803461  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.803834  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.803864  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.804047  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:41.804597  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:41.804777  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:51:41.804895  545672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 11:51:41.804954  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.805025  545672 ssh_runner.go:195] Run: cat /version.json
	I0414 11:51:41.805053  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:51:41.807798  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.807875  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.808160  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.808200  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.808232  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:41.808250  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:41.808478  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:41.808579  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:51:41.808668  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.808748  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:51:41.808851  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:41.808901  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:51:41.808993  545672 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:51:41.809047  545672 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:51:41.884456  545672 ssh_runner.go:195] Run: systemctl --version
	I0414 11:51:41.913792  545672 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 11:51:42.073826  545672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 11:51:42.079429  545672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 11:51:42.079520  545672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 11:51:42.095151  545672 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 11:51:42.095195  545672 start.go:495] detecting cgroup driver to use...
	I0414 11:51:42.095271  545672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 11:51:42.111362  545672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 11:51:42.128099  545672 docker.go:217] disabling cri-docker service (if available) ...
	I0414 11:51:42.128184  545672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 11:51:42.142338  545672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 11:51:42.155679  545672 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 11:51:42.267371  545672 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 11:51:42.409872  545672 docker.go:233] disabling docker service ...
	I0414 11:51:42.409953  545672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 11:51:42.424545  545672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 11:51:42.438136  545672 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 11:51:42.594029  545672 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 11:51:42.717634  545672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 11:51:42.731040  545672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 11:51:42.748895  545672 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 11:51:42.748968  545672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:51:42.759164  545672 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 11:51:42.759233  545672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:51:42.769326  545672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:51:42.780042  545672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:51:42.790812  545672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 11:51:42.801793  545672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 11:51:42.811484  545672 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 11:51:42.811550  545672 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 11:51:42.823671  545672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 11:51:42.833110  545672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:51:42.953020  545672 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 11:51:43.049076  545672 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 11:51:43.049149  545672 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 11:51:43.054448  545672 start.go:563] Will wait 60s for crictl version
	I0414 11:51:43.054553  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:43.058298  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 11:51:43.107053  545672 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 11:51:43.107159  545672 ssh_runner.go:195] Run: crio --version
	I0414 11:51:43.137526  545672 ssh_runner.go:195] Run: crio --version
	I0414 11:51:43.173571  545672 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 11:51:43.174884  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetIP
	I0414 11:51:43.177764  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:43.178130  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:51:33 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:51:43.178162  545672 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:51:43.178411  545672 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 11:51:43.182619  545672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 11:51:43.196863  545672 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 11:51:43.197022  545672 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 11:51:43.197075  545672 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:51:43.229839  545672 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 11:51:43.229935  545672 ssh_runner.go:195] Run: which lz4
	I0414 11:51:43.234094  545672 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 11:51:43.238540  545672 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 11:51:43.238583  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 11:51:44.724367  545672 crio.go:462] duration metric: took 1.490301706s to copy over tarball
	I0414 11:51:44.724451  545672 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 11:51:47.304796  545672 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.580316012s)
	I0414 11:51:47.304835  545672 crio.go:469] duration metric: took 2.580425793s to extract the tarball
	I0414 11:51:47.304848  545672 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 11:51:47.347239  545672 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:51:47.392481  545672 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 11:51:47.392511  545672 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 11:51:47.392617  545672 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:51:47.392708  545672 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 11:51:47.393321  545672 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 11:51:47.393431  545672 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 11:51:47.393463  545672 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 11:51:47.393524  545672 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 11:51:47.392621  545672 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 11:51:47.392620  545672 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 11:51:47.395240  545672 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 11:51:47.395693  545672 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 11:51:47.396074  545672 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 11:51:47.396369  545672 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 11:51:47.396412  545672 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 11:51:47.396458  545672 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 11:51:47.396478  545672 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 11:51:47.396625  545672 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:51:47.546750  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 11:51:47.549044  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 11:51:47.557859  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 11:51:47.560450  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 11:51:47.562681  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 11:51:47.578950  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 11:51:47.587191  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 11:51:47.628951  545672 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 11:51:47.629010  545672 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 11:51:47.629067  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:47.676767  545672 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 11:51:47.676824  545672 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 11:51:47.676887  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:47.709003  545672 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 11:51:47.709059  545672 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 11:51:47.709123  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:47.710080  545672 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 11:51:47.710118  545672 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 11:51:47.710129  545672 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 11:51:47.710161  545672 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 11:51:47.710214  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:47.710164  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:47.717994  545672 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 11:51:47.718038  545672 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 11:51:47.718060  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 11:51:47.718078  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:47.718099  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 11:51:47.718146  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 11:51:47.718184  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 11:51:47.718111  545672 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 11:51:47.718227  545672 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 11:51:47.718258  545672 ssh_runner.go:195] Run: which crictl
	I0414 11:51:47.720571  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 11:51:47.731840  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 11:51:47.822062  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 11:51:47.833195  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 11:51:47.833275  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 11:51:47.855366  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 11:51:47.855485  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 11:51:47.855506  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 11:51:47.865402  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 11:51:47.977219  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 11:51:47.977308  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 11:51:47.992950  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 11:51:47.992950  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 11:51:47.993044  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 11:51:47.993171  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 11:51:48.021078  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 11:51:48.089967  545672 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 11:51:48.105235  545672 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 11:51:48.136362  545672 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 11:51:48.137929  545672 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 11:51:48.150927  545672 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 11:51:48.151008  545672 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 11:51:48.151997  545672 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 11:51:48.187573  545672 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 11:51:49.057207  545672 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:51:49.195804  545672 cache_images.go:92] duration metric: took 1.803269972s to LoadCachedImages
	W0414 11:51:49.195916  545672 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
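The probe-and-remove sequence above is minikube checking CRI-O (via podman and crictl) for the v1.20.0 control-plane images, deleting stale copies, and then failing to load its on-disk cache because the tarballs are missing on this Jenkins host. A minimal sketch of reproducing the same presence check by hand on the node, using the image list from the log above:

    #!/usr/bin/env bash
    # Sketch: report which kubeadm v1.20.0 images are already in CRI-O's storage.
    set -euo pipefail
    for img in \
        registry.k8s.io/kube-apiserver:v1.20.0 \
        registry.k8s.io/kube-controller-manager:v1.20.0 \
        registry.k8s.io/kube-scheduler:v1.20.0 \
        registry.k8s.io/kube-proxy:v1.20.0 \
        registry.k8s.io/etcd:3.4.13-0 \
        registry.k8s.io/coredns:1.7.0 \
        registry.k8s.io/pause:3.2; do
      # podman reads the same containers/storage CRI-O uses, which is why minikube probes with it
      if sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        echo "present: $img"
      else
        echo "missing: $img"   # minikube would have to transfer or pull this one
      fi
    done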
	I0414 11:51:49.195933  545672 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.20.0 crio true true} ...
	I0414 11:51:49.196052  545672 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-943444 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 11:51:49.196150  545672 ssh_runner.go:195] Run: crio config
	I0414 11:51:49.246851  545672 cni.go:84] Creating CNI manager for ""
	I0414 11:51:49.246887  545672 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:51:49.246902  545672 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 11:51:49.246926  545672 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-943444 NodeName:kubernetes-upgrade-943444 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 11:51:49.247066  545672 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-943444"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 11:51:49.247144  545672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 11:51:49.257351  545672 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 11:51:49.257441  545672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 11:51:49.267792  545672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0414 11:51:49.285464  545672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 11:51:49.303220  545672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
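The kubelet drop-in and the kubeadm.yaml rendered above both pin the kubelet to cgroupDriver: cgroupfs on top of CRI-O. The repeated kubelet-check failures later in this attempt (connection refused on 127.0.0.1:10248) only say that the kubelet never came up; one common cause worth ruling out on a CRI-O node is a cgroup-driver mismatch between the two sides. A sketch of checking both, assuming a shell inside the kubernetes-upgrade-943444 VM (for example via minikube ssh):

    # Compare the kubelet's cgroup driver with CRI-O's cgroup manager; they must match (sketch).
    sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
    sudo crio config 2>/dev/null | grep -i cgroup_manager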
	I0414 11:51:49.319947  545672 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I0414 11:51:49.323554  545672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 11:51:49.335743  545672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:51:49.460125  545672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 11:51:49.475959  545672 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444 for IP: 192.168.39.2
	I0414 11:51:49.475994  545672 certs.go:194] generating shared ca certs ...
	I0414 11:51:49.476020  545672 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:51:49.476243  545672 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 11:51:49.476300  545672 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 11:51:49.476313  545672 certs.go:256] generating profile certs ...
	I0414 11:51:49.476395  545672 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/client.key
	I0414 11:51:49.476416  545672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/client.crt with IP's: []
	I0414 11:51:49.722629  545672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/client.crt ...
	I0414 11:51:49.722666  545672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/client.crt: {Name:mke5c74f60ba2dcd4ebdc6f0d4743d0a268c0570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:51:49.722843  545672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/client.key ...
	I0414 11:51:49.722871  545672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/client.key: {Name:mk3669a53cf6b10f1d142b70bf0070374b24770b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:51:49.723017  545672 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key.d1b7d982
	I0414 11:51:49.723048  545672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.crt.d1b7d982 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I0414 11:51:50.125758  545672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.crt.d1b7d982 ...
	I0414 11:51:50.125794  545672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.crt.d1b7d982: {Name:mkfa3a6cf18443c22a82e273a72b49d1f38ccc60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:51:50.125973  545672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key.d1b7d982 ...
	I0414 11:51:50.125988  545672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key.d1b7d982: {Name:mk0f4b280d6dbf45839d1879a91d93255d166c8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:51:50.126060  545672 certs.go:381] copying /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.crt.d1b7d982 -> /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.crt
	I0414 11:51:50.126129  545672 certs.go:385] copying /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key.d1b7d982 -> /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key
	I0414 11:51:50.126183  545672 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.key
	I0414 11:51:50.126198  545672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.crt with IP's: []
	I0414 11:51:50.217175  545672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.crt ...
	I0414 11:51:50.217226  545672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.crt: {Name:mkfcf55a1f6e5c0119794d14f5a9c2bd6c22414b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:51:50.217460  545672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.key ...
	I0414 11:51:50.217487  545672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.key: {Name:mk84f8f326d4025c3d19f6e3a577b8ec00b32f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:51:50.217754  545672 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 11:51:50.217816  545672 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 11:51:50.217831  545672 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 11:51:50.217894  545672 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 11:51:50.217929  545672 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 11:51:50.217972  545672 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 11:51:50.218029  545672 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
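The profile certificates generated above (client, apiserver, proxy-client) are copied into /var/lib/minikube/certs in the next few steps; the apiserver certificate was requested for the service IP 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.2. One way to confirm those SANs, a sketch using the workspace path shown in the log:

    # Show the Subject Alternative Names on the generated apiserver certificate (path from the log above).
    PROFILE=/home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444
    openssl x509 -in "$PROFILE/apiserver.crt" -noout -text | grep -A1 'Subject Alternative Name'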
	I0414 11:51:50.218613  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 11:51:50.246875  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 11:51:50.275676  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 11:51:50.309565  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 11:51:50.352590  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 11:51:50.384009  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 11:51:50.414802  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 11:51:50.444808  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 11:51:50.479141  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 11:51:50.509376  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 11:51:50.535552  545672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 11:51:50.559569  545672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 11:51:50.578247  545672 ssh_runner.go:195] Run: openssl version
	I0414 11:51:50.584222  545672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 11:51:50.595712  545672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 11:51:50.600261  545672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 11:51:50.600341  545672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 11:51:50.607397  545672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 11:51:50.623071  545672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 11:51:50.634530  545672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:51:50.639100  545672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:51:50.639167  545672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:51:50.644868  545672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 11:51:50.657188  545672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 11:51:50.669537  545672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 11:51:50.673876  545672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 11:51:50.673956  545672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 11:51:50.681911  545672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
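The openssl and ln commands just above install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certificates), which lets TLS clients on the node trust the minikube CA without rebuilding the distribution bundle. The same symlink can be derived by hand; a sketch:

    # Recreate the subject-hash symlink minikube sets up for the cluster CA (sketch only).
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")     # prints the subject hash, e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # ".0" = first certificate with this hash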
	I0414 11:51:50.694324  545672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 11:51:50.699917  545672 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 11:51:50.699983  545672 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:51:50.700066  545672 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 11:51:50.700116  545672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 11:51:50.738598  545672 cri.go:89] found id: ""
	I0414 11:51:50.738693  545672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 11:51:50.750232  545672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 11:51:50.760751  545672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 11:51:50.770806  545672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 11:51:50.770833  545672 kubeadm.go:157] found existing configuration files:
	
	I0414 11:51:50.770903  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 11:51:50.783726  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 11:51:50.783813  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 11:51:50.796020  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 11:51:50.806847  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 11:51:50.806944  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 11:51:50.817758  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 11:51:50.828534  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 11:51:50.828618  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 11:51:50.838157  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 11:51:50.847427  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 11:51:50.847498  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 11:51:50.857456  545672 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 11:51:51.142861  545672 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 11:53:49.376792  545672 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 11:53:49.376932  545672 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 11:53:49.378239  545672 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 11:53:49.378352  545672 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 11:53:49.378481  545672 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 11:53:49.378637  545672 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 11:53:49.378771  545672 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 11:53:49.378858  545672 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 11:53:49.489054  545672 out.go:235]   - Generating certificates and keys ...
	I0414 11:53:49.489173  545672 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 11:53:49.489289  545672 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 11:53:49.489393  545672 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 11:53:49.489473  545672 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 11:53:49.489561  545672 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 11:53:49.489632  545672 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 11:53:49.489709  545672 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 11:53:49.489888  545672 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-943444 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0414 11:53:49.489956  545672 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 11:53:49.490117  545672 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-943444 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0414 11:53:49.490252  545672 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 11:53:49.490365  545672 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 11:53:49.490441  545672 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 11:53:49.490532  545672 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 11:53:49.490609  545672 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 11:53:49.490657  545672 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 11:53:49.490721  545672 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 11:53:49.490806  545672 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 11:53:49.490962  545672 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 11:53:49.491094  545672 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 11:53:49.491149  545672 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 11:53:49.491248  545672 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 11:53:49.737757  545672 out.go:235]   - Booting up control plane ...
	I0414 11:53:49.737912  545672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 11:53:49.738071  545672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 11:53:49.738182  545672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 11:53:49.738299  545672 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 11:53:49.738526  545672 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 11:53:49.738609  545672 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 11:53:49.738708  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:53:49.739001  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:53:49.739125  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:53:49.739417  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:53:49.739540  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:53:49.739833  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:53:49.739922  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:53:49.740104  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:53:49.740174  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:53:49.740384  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:53:49.740395  545672 kubeadm.go:310] 
	I0414 11:53:49.740455  545672 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 11:53:49.740508  545672 kubeadm.go:310] 		timed out waiting for the condition
	I0414 11:53:49.740518  545672 kubeadm.go:310] 
	I0414 11:53:49.740570  545672 kubeadm.go:310] 	This error is likely caused by:
	I0414 11:53:49.740630  545672 kubeadm.go:310] 		- The kubelet is not running
	I0414 11:53:49.740770  545672 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 11:53:49.740784  545672 kubeadm.go:310] 
	I0414 11:53:49.740938  545672 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 11:53:49.740982  545672 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 11:53:49.741035  545672 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 11:53:49.741044  545672 kubeadm.go:310] 
	I0414 11:53:49.741179  545672 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 11:53:49.741290  545672 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 11:53:49.741303  545672 kubeadm.go:310] 
	I0414 11:53:49.741446  545672 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 11:53:49.741568  545672 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 11:53:49.741687  545672 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 11:53:49.741799  545672 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 11:53:49.741867  545672 kubeadm.go:310] 
	W0414 11:53:49.741987  545672 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-943444 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-943444 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-943444 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-943444 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
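This first kubeadm init attempt timed out in wait-control-plane because the kubelet never answered on 127.0.0.1:10248; minikube resets the node below and retries the identical command. The triage that kubeadm's own error text asks for could be captured on the node before that retry; a sketch, assuming access to the VM via minikube ssh -p kubernetes-upgrade-943444:

    # Kubelet and control-plane triage on the node, following kubeadm's suggestions above (sketch).
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause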
	
	I0414 11:53:49.742041  545672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 11:53:50.206853  545672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:53:50.224611  545672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 11:53:50.235272  545672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 11:53:50.235307  545672 kubeadm.go:157] found existing configuration files:
	
	I0414 11:53:50.235363  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 11:53:50.244541  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 11:53:50.244606  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 11:53:50.256501  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 11:53:50.266000  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 11:53:50.266074  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 11:53:50.276439  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 11:53:50.287869  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 11:53:50.287932  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 11:53:50.299936  545672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 11:53:50.311806  545672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 11:53:50.311882  545672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 11:53:50.322108  545672 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 11:53:50.533607  545672 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 11:55:46.484158  545672 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 11:55:46.484276  545672 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 11:55:46.486013  545672 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 11:55:46.486112  545672 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 11:55:46.486238  545672 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 11:55:46.486361  545672 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 11:55:46.486493  545672 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 11:55:46.486595  545672 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 11:55:46.489098  545672 out.go:235]   - Generating certificates and keys ...
	I0414 11:55:46.489212  545672 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 11:55:46.489307  545672 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 11:55:46.489433  545672 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 11:55:46.489492  545672 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 11:55:46.489569  545672 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 11:55:46.489629  545672 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 11:55:46.489716  545672 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 11:55:46.489812  545672 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 11:55:46.489909  545672 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 11:55:46.490010  545672 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 11:55:46.490042  545672 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 11:55:46.490093  545672 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 11:55:46.490134  545672 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 11:55:46.490176  545672 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 11:55:46.490272  545672 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 11:55:46.490359  545672 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 11:55:46.490564  545672 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 11:55:46.490703  545672 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 11:55:46.490770  545672 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 11:55:46.490871  545672 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 11:55:46.492343  545672 out.go:235]   - Booting up control plane ...
	I0414 11:55:46.492433  545672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 11:55:46.492516  545672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 11:55:46.492601  545672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 11:55:46.492739  545672 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 11:55:46.492993  545672 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 11:55:46.493056  545672 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 11:55:46.493147  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:55:46.493367  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:55:46.493490  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:55:46.493720  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:55:46.493802  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:55:46.494062  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:55:46.494165  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:55:46.494431  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:55:46.494524  545672 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 11:55:46.494701  545672 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 11:55:46.494717  545672 kubeadm.go:310] 
	I0414 11:55:46.494764  545672 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 11:55:46.494804  545672 kubeadm.go:310] 		timed out waiting for the condition
	I0414 11:55:46.494811  545672 kubeadm.go:310] 
	I0414 11:55:46.494840  545672 kubeadm.go:310] 	This error is likely caused by:
	I0414 11:55:46.494903  545672 kubeadm.go:310] 		- The kubelet is not running
	I0414 11:55:46.495044  545672 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 11:55:46.495069  545672 kubeadm.go:310] 
	I0414 11:55:46.495214  545672 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 11:55:46.495257  545672 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 11:55:46.495323  545672 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 11:55:46.495333  545672 kubeadm.go:310] 
	I0414 11:55:46.495446  545672 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 11:55:46.495522  545672 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 11:55:46.495528  545672 kubeadm.go:310] 
	I0414 11:55:46.495626  545672 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 11:55:46.495738  545672 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 11:55:46.495849  545672 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 11:55:46.495939  545672 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 11:55:46.495985  545672 kubeadm.go:310] 
	I0414 11:55:46.496020  545672 kubeadm.go:394] duration metric: took 3m55.796041208s to StartCluster
	I0414 11:55:46.496074  545672 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 11:55:46.496146  545672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 11:55:46.544088  545672 cri.go:89] found id: ""
	I0414 11:55:46.544123  545672 logs.go:282] 0 containers: []
	W0414 11:55:46.544136  545672 logs.go:284] No container was found matching "kube-apiserver"
	I0414 11:55:46.544143  545672 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 11:55:46.544224  545672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 11:55:46.580817  545672 cri.go:89] found id: ""
	I0414 11:55:46.580849  545672 logs.go:282] 0 containers: []
	W0414 11:55:46.580861  545672 logs.go:284] No container was found matching "etcd"
	I0414 11:55:46.580869  545672 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 11:55:46.580934  545672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 11:55:46.616242  545672 cri.go:89] found id: ""
	I0414 11:55:46.616274  545672 logs.go:282] 0 containers: []
	W0414 11:55:46.616284  545672 logs.go:284] No container was found matching "coredns"
	I0414 11:55:46.616291  545672 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 11:55:46.616352  545672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 11:55:46.658865  545672 cri.go:89] found id: ""
	I0414 11:55:46.658894  545672 logs.go:282] 0 containers: []
	W0414 11:55:46.658905  545672 logs.go:284] No container was found matching "kube-scheduler"
	I0414 11:55:46.658914  545672 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 11:55:46.658985  545672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 11:55:46.698469  545672 cri.go:89] found id: ""
	I0414 11:55:46.698502  545672 logs.go:282] 0 containers: []
	W0414 11:55:46.698515  545672 logs.go:284] No container was found matching "kube-proxy"
	I0414 11:55:46.698523  545672 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 11:55:46.698618  545672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 11:55:46.733444  545672 cri.go:89] found id: ""
	I0414 11:55:46.733475  545672 logs.go:282] 0 containers: []
	W0414 11:55:46.733485  545672 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 11:55:46.733493  545672 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 11:55:46.733568  545672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 11:55:46.768570  545672 cri.go:89] found id: ""
	I0414 11:55:46.768609  545672 logs.go:282] 0 containers: []
	W0414 11:55:46.768620  545672 logs.go:284] No container was found matching "kindnet"
	I0414 11:55:46.768645  545672 logs.go:123] Gathering logs for kubelet ...
	I0414 11:55:46.768666  545672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 11:55:46.827819  545672 logs.go:123] Gathering logs for dmesg ...
	I0414 11:55:46.827857  545672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 11:55:46.841168  545672 logs.go:123] Gathering logs for describe nodes ...
	I0414 11:55:46.841200  545672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 11:55:46.966215  545672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 11:55:46.966241  545672 logs.go:123] Gathering logs for CRI-O ...
	I0414 11:55:46.966255  545672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 11:55:47.086786  545672 logs.go:123] Gathering logs for container status ...
	I0414 11:55:47.086835  545672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 11:55:47.123814  545672 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 11:55:47.123887  545672 out.go:270] * 
	* 
	W0414 11:55:47.123968  545672 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 11:55:47.123989  545672 out.go:270] * 
	* 
	W0414 11:55:47.124888  545672 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 11:55:47.128054  545672 out.go:201] 
	W0414 11:55:47.129334  545672 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 11:55:47.129375  545672 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 11:55:47.129398  545672 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 11:55:47.130717  545672 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
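For anyone reproducing this failure outside CI, the log above already names the relevant checks; the following is only a sketch assembled from those printed suggestions (kubelet status, kubelet journal, crictl container listing, and the cgroup-driver hint), reusing the profile name and binary path from this run, and is not part of the recorded test output:

	# check whether the kubelet is running on the node (per the kubeadm suggestion above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-943444 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-943444 ssh "sudo journalctl -xeu kubelet"
	# list any control-plane containers CRI-O managed to start (per the kubeadm suggestion above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-943444 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the same start with the cgroup-driver hint from the minikube suggestion above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd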
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-943444
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-943444: (6.320971196s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-943444 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-943444 status --format={{.Host}}: exit status 7 (77.953752ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.440799095s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-943444 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.024838ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-943444] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-943444
	    minikube start -p kubernetes-upgrade-943444 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9434442 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-943444 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-943444 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (12.889663868s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-14 11:57:00.082840702 +0000 UTC m=+3950.889016815
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-943444 -n kubernetes-upgrade-943444
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-943444 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-943444 logs -n 25: (1.211514644s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-223451                | NoKubernetes-223451       | jenkins | v1.35.0 | 14 Apr 25 11:52 UTC | 14 Apr 25 11:52 UTC |
	| start   | -p NoKubernetes-223451                | NoKubernetes-223451       | jenkins | v1.35.0 | 14 Apr 25 11:52 UTC | 14 Apr 25 11:53 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-930410             | running-upgrade-930410    | jenkins | v1.35.0 | 14 Apr 25 11:52 UTC | 14 Apr 25 11:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-515371             | stopped-upgrade-515371    | jenkins | v1.35.0 | 14 Apr 25 11:53 UTC | 14 Apr 25 11:53 UTC |
	| start   | -p pause-066593 --memory=2048         | pause-066593              | jenkins | v1.35.0 | 14 Apr 25 11:53 UTC | 14 Apr 25 11:54 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-223451 sudo           | NoKubernetes-223451       | jenkins | v1.35.0 | 14 Apr 25 11:53 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-223451                | NoKubernetes-223451       | jenkins | v1.35.0 | 14 Apr 25 11:53 UTC | 14 Apr 25 11:53 UTC |
	| start   | -p NoKubernetes-223451                | NoKubernetes-223451       | jenkins | v1.35.0 | 14 Apr 25 11:53 UTC | 14 Apr 25 11:54 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-223451 sudo           | NoKubernetes-223451       | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-223451                | NoKubernetes-223451       | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC | 14 Apr 25 11:54 UTC |
	| start   | -p force-systemd-flag-567758          | force-systemd-flag-567758 | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC | 14 Apr 25 11:54 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-066593                       | pause-066593              | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-930410             | running-upgrade-930410    | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC | 14 Apr 25 11:54 UTC |
	| start   | -p cert-expiration-623032             | cert-expiration-623032    | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC | 14 Apr 25 11:55 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-567758 ssh cat     | force-systemd-flag-567758 | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC | 14 Apr 25 11:54 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-567758          | force-systemd-flag-567758 | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC | 14 Apr 25 11:54 UTC |
	| start   | -p cert-options-494137                | cert-options-494137       | jenkins | v1.35.0 | 14 Apr 25 11:54 UTC | 14 Apr 25 11:55 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-494137 ssh               | cert-options-494137       | jenkins | v1.35.0 | 14 Apr 25 11:55 UTC | 14 Apr 25 11:55 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-494137 -- sudo        | cert-options-494137       | jenkins | v1.35.0 | 14 Apr 25 11:55 UTC | 14 Apr 25 11:55 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-494137                | cert-options-494137       | jenkins | v1.35.0 | 14 Apr 25 11:55 UTC | 14 Apr 25 11:55 UTC |
	| start   | -p auto-948178 --memory=3072          | auto-948178               | jenkins | v1.35.0 | 14 Apr 25 11:55 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-943444          | kubernetes-upgrade-943444 | jenkins | v1.35.0 | 14 Apr 25 11:55 UTC | 14 Apr 25 11:55 UTC |
	| start   | -p kubernetes-upgrade-943444          | kubernetes-upgrade-943444 | jenkins | v1.35.0 | 14 Apr 25 11:55 UTC | 14 Apr 25 11:56 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-943444          | kubernetes-upgrade-943444 | jenkins | v1.35.0 | 14 Apr 25 11:56 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-943444          | kubernetes-upgrade-943444 | jenkins | v1.35.0 | 14 Apr 25 11:56 UTC | 14 Apr 25 11:57 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 11:56:47
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 11:56:47.238778  551229 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:56:47.239173  551229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:56:47.239193  551229 out.go:358] Setting ErrFile to fd 2...
	I0414 11:56:47.239199  551229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:56:47.239654  551229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:56:47.240666  551229 out.go:352] Setting JSON to false
	I0414 11:56:47.241701  551229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20358,"bootTime":1744611449,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:56:47.241823  551229 start.go:139] virtualization: kvm guest
	I0414 11:56:47.243426  551229 out.go:177] * [kubernetes-upgrade-943444] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:56:47.244602  551229 notify.go:220] Checking for updates...
	I0414 11:56:47.244625  551229 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:56:47.245876  551229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:56:47.247014  551229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:56:47.248030  551229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:56:47.249169  551229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:56:47.250569  551229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:56:47.252397  551229 config.go:182] Loaded profile config "kubernetes-upgrade-943444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:56:47.252998  551229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:56:47.253082  551229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:56:47.271993  551229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I0414 11:56:47.272511  551229 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:56:47.273057  551229 main.go:141] libmachine: Using API Version  1
	I0414 11:56:47.273081  551229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:56:47.273695  551229 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:56:47.273919  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:47.274217  551229 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:56:47.274589  551229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:56:47.274649  551229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:56:47.290546  551229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I0414 11:56:47.291017  551229 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:56:47.291444  551229 main.go:141] libmachine: Using API Version  1
	I0414 11:56:47.291464  551229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:56:47.291849  551229 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:56:47.292028  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:47.328733  551229 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 11:56:47.329865  551229 start.go:297] selected driver: kvm2
	I0414 11:56:47.329878  551229 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:56:47.330000  551229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:56:47.330725  551229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:56:47.330839  551229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 11:56:47.346810  551229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 11:56:47.347224  551229 cni.go:84] Creating CNI manager for ""
	I0414 11:56:47.347273  551229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:56:47.347403  551229 start.go:340] cluster config:
	{Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:56:47.347555  551229 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:56:47.349196  551229 out.go:177] * Starting "kubernetes-upgrade-943444" primary control-plane node in "kubernetes-upgrade-943444" cluster
	I0414 11:56:47.350475  551229 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 11:56:47.350519  551229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 11:56:47.350543  551229 cache.go:56] Caching tarball of preloaded images
	I0414 11:56:47.350658  551229 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 11:56:47.350672  551229 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 11:56:47.350781  551229 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/config.json ...
	I0414 11:56:47.351004  551229 start.go:360] acquireMachinesLock for kubernetes-upgrade-943444: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 11:56:47.351061  551229 start.go:364] duration metric: took 32.846µs to acquireMachinesLock for "kubernetes-upgrade-943444"
	I0414 11:56:47.351077  551229 start.go:96] Skipping create...Using existing machine configuration
	I0414 11:56:47.351082  551229 fix.go:54] fixHost starting: 
	I0414 11:56:47.351449  551229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:56:47.351487  551229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:56:47.368726  551229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I0414 11:56:47.369296  551229 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:56:47.369804  551229 main.go:141] libmachine: Using API Version  1
	I0414 11:56:47.369824  551229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:56:47.370247  551229 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:56:47.370458  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:47.370623  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetState
	I0414 11:56:47.372501  551229 fix.go:112] recreateIfNeeded on kubernetes-upgrade-943444: state=Running err=<nil>
	W0414 11:56:47.372528  551229 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 11:56:47.374228  551229 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-943444" VM ...
	I0414 11:56:43.250806  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:43.750626  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:44.251094  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:44.751402  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:45.250680  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:45.751333  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:46.250532  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:46.751071  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:47.250822  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:47.750555  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:44.702886  550493 pod_ready.go:103] pod "coredns-668d6bf9bc-9bh6l" in "kube-system" namespace has status "Ready":"False"
	I0414 11:56:47.201416  550493 pod_ready.go:103] pod "coredns-668d6bf9bc-9bh6l" in "kube-system" namespace has status "Ready":"False"
	I0414 11:56:47.375214  551229 machine.go:93] provisionDockerMachine start ...
	I0414 11:56:47.375236  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:47.375456  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:47.378045  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.378433  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:47.378460  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.378641  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:47.378780  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:47.378913  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:47.379064  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:47.379203  551229 main.go:141] libmachine: Using SSH client type: native
	I0414 11:56:47.379469  551229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:56:47.379481  551229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 11:56:47.491750  551229 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-943444
	
	I0414 11:56:47.491794  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetMachineName
	I0414 11:56:47.492062  551229 buildroot.go:166] provisioning hostname "kubernetes-upgrade-943444"
	I0414 11:56:47.492097  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetMachineName
	I0414 11:56:47.492292  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:47.494902  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.495312  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:47.495348  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.495456  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:47.495657  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:47.495874  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:47.496030  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:47.496221  551229 main.go:141] libmachine: Using SSH client type: native
	I0414 11:56:47.496456  551229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:56:47.496472  551229 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-943444 && echo "kubernetes-upgrade-943444" | sudo tee /etc/hostname
	I0414 11:56:47.621156  551229 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-943444
	
	I0414 11:56:47.621186  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:47.624061  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.624446  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:47.624479  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.624636  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:47.624873  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:47.625065  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:47.625232  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:47.625386  551229 main.go:141] libmachine: Using SSH client type: native
	I0414 11:56:47.625653  551229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:56:47.625676  551229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-943444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-943444/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-943444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 11:56:47.771595  551229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 11:56:47.771627  551229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 11:56:47.771652  551229 buildroot.go:174] setting up certificates
	I0414 11:56:47.771665  551229 provision.go:84] configureAuth start
	I0414 11:56:47.771675  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetMachineName
	I0414 11:56:47.771954  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetIP
	I0414 11:56:47.774530  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.774994  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:47.775053  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.775142  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:47.777591  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.777985  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:47.778017  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.778143  551229 provision.go:143] copyHostCerts
	I0414 11:56:47.778201  551229 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 11:56:47.778212  551229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 11:56:47.778316  551229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 11:56:47.778418  551229 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 11:56:47.778428  551229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 11:56:47.778447  551229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 11:56:47.778509  551229 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 11:56:47.778515  551229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 11:56:47.778542  551229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 11:56:47.778604  551229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-943444 san=[127.0.0.1 192.168.39.2 kubernetes-upgrade-943444 localhost minikube]
	I0414 11:56:47.866240  551229 provision.go:177] copyRemoteCerts
	I0414 11:56:47.866312  551229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 11:56:47.866339  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:47.869336  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.869734  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:47.869766  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:47.870111  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:47.870329  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:47.870496  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:47.870665  551229 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:56:47.962067  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 11:56:47.989536  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 11:56:48.015514  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 11:56:48.040367  551229 provision.go:87] duration metric: took 268.684017ms to configureAuth
	I0414 11:56:48.040406  551229 buildroot.go:189] setting minikube options for container-runtime
	I0414 11:56:48.040652  551229 config.go:182] Loaded profile config "kubernetes-upgrade-943444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:56:48.040734  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:48.043732  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:48.044055  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:48.044092  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:48.044232  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:48.044432  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:48.044600  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:48.044775  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:48.044938  551229 main.go:141] libmachine: Using SSH client type: native
	I0414 11:56:48.045163  551229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:56:48.045183  551229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 11:56:48.839378  551229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 11:56:48.839408  551229 machine.go:96] duration metric: took 1.46417952s to provisionDockerMachine
	I0414 11:56:48.839420  551229 start.go:293] postStartSetup for "kubernetes-upgrade-943444" (driver="kvm2")
	I0414 11:56:48.839432  551229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 11:56:48.839452  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:48.839809  551229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 11:56:48.839878  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:48.843376  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:48.843793  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:48.843825  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:48.844177  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:48.844410  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:48.844598  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:48.844785  551229 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:56:48.977903  551229 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 11:56:49.010839  551229 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 11:56:49.010880  551229 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 11:56:49.010958  551229 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 11:56:49.011056  551229 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 11:56:49.011154  551229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 11:56:49.044656  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 11:56:49.109769  551229 start.go:296] duration metric: took 270.334162ms for postStartSetup
	I0414 11:56:49.109848  551229 fix.go:56] duration metric: took 1.758765108s for fixHost
	I0414 11:56:49.109876  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:49.113220  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.113822  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:49.113881  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.114231  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:49.114477  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:49.114720  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:49.114932  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:49.115173  551229 main.go:141] libmachine: Using SSH client type: native
	I0414 11:56:49.115531  551229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 11:56:49.115555  551229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 11:56:49.297430  551229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744631809.283690170
	
	I0414 11:56:49.297458  551229 fix.go:216] guest clock: 1744631809.283690170
	I0414 11:56:49.297498  551229 fix.go:229] Guest: 2025-04-14 11:56:49.28369017 +0000 UTC Remote: 2025-04-14 11:56:49.109856078 +0000 UTC m=+1.910950610 (delta=173.834092ms)
	I0414 11:56:49.297530  551229 fix.go:200] guest clock delta is within tolerance: 173.834092ms
	I0414 11:56:49.297542  551229 start.go:83] releasing machines lock for "kubernetes-upgrade-943444", held for 1.94647079s
	I0414 11:56:49.297571  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:49.297862  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetIP
	I0414 11:56:49.301010  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.301399  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:49.301454  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.301641  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:49.302194  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:49.302376  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .DriverName
	I0414 11:56:49.302469  551229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 11:56:49.302526  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:49.302582  551229 ssh_runner.go:195] Run: cat /version.json
	I0414 11:56:49.302613  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHHostname
	I0414 11:56:49.305397  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.305455  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.305855  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:49.305888  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.305915  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:49.305933  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:49.305997  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:49.306140  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHPort
	I0414 11:56:49.306231  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:49.306354  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHKeyPath
	I0414 11:56:49.306446  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:49.306545  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetSSHUsername
	I0414 11:56:49.306621  551229 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:56:49.306670  551229 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/kubernetes-upgrade-943444/id_rsa Username:docker}
	I0414 11:56:49.475734  551229 ssh_runner.go:195] Run: systemctl --version
	I0414 11:56:49.483919  551229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 11:56:49.662880  551229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 11:56:49.668549  551229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 11:56:49.668658  551229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 11:56:49.681730  551229 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0414 11:56:49.681761  551229 start.go:495] detecting cgroup driver to use...
	I0414 11:56:49.681850  551229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 11:56:49.706679  551229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 11:56:49.724608  551229 docker.go:217] disabling cri-docker service (if available) ...
	I0414 11:56:49.724670  551229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 11:56:49.742147  551229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 11:56:49.757139  551229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 11:56:49.939489  551229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 11:56:50.106669  551229 docker.go:233] disabling docker service ...
	I0414 11:56:50.106784  551229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 11:56:50.126301  551229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 11:56:50.140747  551229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 11:56:50.314783  551229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 11:56:50.477552  551229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 11:56:50.491478  551229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 11:56:50.513833  551229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 11:56:50.513940  551229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:56:50.527251  551229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 11:56:50.527342  551229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:56:50.539647  551229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:56:50.554269  551229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:56:50.566888  551229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 11:56:50.578949  551229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:56:50.590398  551229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:56:50.601633  551229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:56:50.614260  551229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 11:56:50.624068  551229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 11:56:50.633189  551229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:56:50.821613  551229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 11:56:51.147509  551229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 11:56:51.147591  551229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 11:56:51.151984  551229 start.go:563] Will wait 60s for crictl version
	I0414 11:56:51.152044  551229 ssh_runner.go:195] Run: which crictl
	I0414 11:56:51.155843  551229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 11:56:51.186893  551229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 11:56:51.186980  551229 ssh_runner.go:195] Run: crio --version
	I0414 11:56:51.215024  551229 ssh_runner.go:195] Run: crio --version
	I0414 11:56:51.244781  551229 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 11:56:51.245942  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) Calling .GetIP
	I0414 11:56:51.248934  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:51.249282  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:e3:cb", ip: ""} in network mk-kubernetes-upgrade-943444: {Iface:virbr3 ExpiryTime:2025-04-14 12:56:21 +0000 UTC Type:0 Mac:52:54:00:02:e3:cb Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:kubernetes-upgrade-943444 Clientid:01:52:54:00:02:e3:cb}
	I0414 11:56:51.249312  551229 main.go:141] libmachine: (kubernetes-upgrade-943444) DBG | domain kubernetes-upgrade-943444 has defined IP address 192.168.39.2 and MAC address 52:54:00:02:e3:cb in network mk-kubernetes-upgrade-943444
	I0414 11:56:51.249543  551229 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 11:56:51.255275  551229 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 11:56:51.255440  551229 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 11:56:51.255505  551229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:56:51.301017  551229 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 11:56:51.301038  551229 crio.go:433] Images already preloaded, skipping extraction
	I0414 11:56:51.301094  551229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:56:51.334253  551229 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 11:56:51.334284  551229 cache_images.go:84] Images are preloaded, skipping loading
	I0414 11:56:51.334292  551229 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.32.2 crio true true} ...
	I0414 11:56:51.334440  551229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-943444 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 11:56:51.334513  551229 ssh_runner.go:195] Run: crio config
	I0414 11:56:51.383447  551229 cni.go:84] Creating CNI manager for ""
	I0414 11:56:51.383474  551229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:56:51.383488  551229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 11:56:51.383509  551229 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-943444 NodeName:kubernetes-upgrade-943444 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 11:56:51.383636  551229 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-943444"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 11:56:51.383754  551229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 11:56:51.393886  551229 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 11:56:51.393984  551229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 11:56:51.403512  551229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0414 11:56:51.419855  551229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 11:56:51.436082  551229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I0414 11:56:51.451740  551229 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I0414 11:56:51.455899  551229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:56:51.575047  551229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 11:56:51.590875  551229 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444 for IP: 192.168.39.2
	I0414 11:56:51.590908  551229 certs.go:194] generating shared ca certs ...
	I0414 11:56:51.590930  551229 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:56:51.591149  551229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 11:56:51.591224  551229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 11:56:51.591239  551229 certs.go:256] generating profile certs ...
	I0414 11:56:51.591410  551229 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/client.key
	I0414 11:56:51.591491  551229 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key.d1b7d982
	I0414 11:56:51.591560  551229 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.key
	I0414 11:56:51.591808  551229 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 11:56:51.591856  551229 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 11:56:51.591870  551229 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 11:56:51.591906  551229 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 11:56:51.591938  551229 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 11:56:51.591982  551229 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 11:56:51.592049  551229 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 11:56:51.592837  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 11:56:51.616213  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 11:56:51.640142  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 11:56:51.663188  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 11:56:51.689950  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 11:56:51.714059  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 11:56:51.736696  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 11:56:51.762625  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kubernetes-upgrade-943444/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 11:56:51.788560  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 11:56:51.811959  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 11:56:51.836529  551229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 11:56:51.862274  551229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 11:56:51.883965  551229 ssh_runner.go:195] Run: openssl version
	I0414 11:56:51.889975  551229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 11:56:51.902841  551229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:56:51.908705  551229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:56:51.908807  551229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:56:51.914783  551229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 11:56:51.923966  551229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 11:56:51.942386  551229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 11:56:51.972996  551229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 11:56:51.973056  551229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 11:56:51.995066  551229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 11:56:52.043172  551229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 11:56:52.081783  551229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 11:56:52.102487  551229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 11:56:52.102550  551229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 11:56:52.141046  551229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 11:56:52.172403  551229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 11:56:52.181067  551229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 11:56:52.194734  551229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 11:56:52.204601  551229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 11:56:52.212137  551229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 11:56:52.224531  551229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 11:56:52.230462  551229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 11:56:52.236834  551229 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-943444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kuberne
tes-upgrade-943444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:56:52.236981  551229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 11:56:52.237036  551229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 11:56:48.251442  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:48.750533  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:49.251188  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:49.751081  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:50.251473  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:50.751167  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:51.250670  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:51.751206  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:52.251380  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:52.750794  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:56:49.203886  550493 pod_ready.go:103] pod "coredns-668d6bf9bc-9bh6l" in "kube-system" namespace has status "Ready":"False"
	I0414 11:56:51.700982  550493 pod_ready.go:103] pod "coredns-668d6bf9bc-9bh6l" in "kube-system" namespace has status "Ready":"False"
	I0414 11:56:53.702422  550493 pod_ready.go:103] pod "coredns-668d6bf9bc-9bh6l" in "kube-system" namespace has status "Ready":"False"
	I0414 11:56:52.279340  551229 cri.go:89] found id: "5840895828c03cc695bc7c1dc2bd5d834c4bc59d3ceb44aea966e6e18ba8ffa6"
	I0414 11:56:52.279369  551229 cri.go:89] found id: "b9953ed1caf941dd43ddb62610adaea2ed081507d02d91d343e9abd6435597e3"
	I0414 11:56:52.279374  551229 cri.go:89] found id: "100c35f02af84dbd4f1a86f525f40cf2827c8f56aa2e7f34d5d85c05009642e0"
	I0414 11:56:52.279377  551229 cri.go:89] found id: "cd4b0439e08b1fe790d1bb981e257de6853a6cb1474f75c777dab64104419bf8"
	I0414 11:56:52.279380  551229 cri.go:89] found id: "2109285aa3dd80196fe697a9c6ccc09a5ceb0f45f1e4ba9f07b0483c9d32e99c"
	I0414 11:56:52.279383  551229 cri.go:89] found id: "b9b46055df9926abd0b07fcd6e73ea4f88dd0bfb79a89b03276c6613d5d8c550"
	I0414 11:56:52.279385  551229 cri.go:89] found id: ""
	I0414 11:56:52.279439  551229 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
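The log above stages CA material with scp, links each certificate under its OpenSSL subject hash (for example b5213941.0), and then probes expiry with -checkend. A simplified sketch of the same pattern, run by hand inside the guest (paths are the ones shown in the log; this is not the minikube implementation itself):

    # compute the subject hash for a CA and expose it as the <hash>.0 symlink
    # that TLS clients resolve under /etc/ssl/certs
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

    # exit non-zero if the apiserver certificate expires within 24h (86400s),
    # mirroring the -checkend checks logged before StartCluster
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400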
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-943444 -n kubernetes-upgrade-943444
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-943444 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-943444 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-943444 describe pod storage-provisioner: exit status 1 (68.859717ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-943444 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-943444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-943444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-943444: (1.149816867s)
--- FAIL: TestKubernetesUpgrade (411.39s)
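The post-mortem for this test finds the offending pod with a phase field selector and then tries to describe it. The same two queries can be issued directly (the context name is the profile from this run; the pod had apparently already been removed by the time describe ran, hence the NotFound above):

    # list pods in any namespace whose phase is not Running
    kubectl --context kubernetes-upgrade-943444 get po -A \
      -o=jsonpath='{.items[*].metadata.name}' \
      --field-selector=status.phase!=Running

    # describe the pod reported above; in this run it returned NotFound (exit 1)
    kubectl --context kubernetes-upgrade-943444 describe pod storage-provisioner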

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (392.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-066593 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-066593 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m29.33754919s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-066593] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-066593" primary control-plane node in "pause-066593" cluster
	* Updating the running kvm2 "pause-066593" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-066593" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
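pause_test.go:100 is checking the second start's combined output for a fixed message. One way to reproduce that check by hand, assuming the same profile and flags as above, is to grep a rerun's output (a sketch only; the test performs the comparison in Go):

    # rerun the second start and flag the missing reconfiguration message
    out/minikube-linux-amd64 start -p pause-066593 --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio 2>&1 \
      | grep -q "The running cluster does not require reconfiguration" \
      || echo "reconfiguration message not found in start output"

The stderr that the test captured for this start follows.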
** stderr ** 
	I0414 11:54:12.847148  549274 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:54:12.847277  549274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:54:12.847283  549274 out.go:358] Setting ErrFile to fd 2...
	I0414 11:54:12.847311  549274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:54:12.847601  549274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:54:12.848323  549274 out.go:352] Setting JSON to false
	I0414 11:54:12.849649  549274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20204,"bootTime":1744611449,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:54:12.849737  549274 start.go:139] virtualization: kvm guest
	I0414 11:54:12.851644  549274 out.go:177] * [pause-066593] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:54:12.853179  549274 notify.go:220] Checking for updates...
	I0414 11:54:12.853197  549274 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:54:12.854389  549274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:54:12.855529  549274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:54:12.856728  549274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:54:12.858471  549274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:54:12.859499  549274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:54:12.861402  549274 config.go:182] Loaded profile config "pause-066593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:54:12.862488  549274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:54:12.862571  549274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:54:12.884179  549274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
	I0414 11:54:12.884773  549274 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:54:12.885507  549274 main.go:141] libmachine: Using API Version  1
	I0414 11:54:12.885536  549274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:54:12.885956  549274 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:54:12.886176  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:12.886471  549274 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:54:12.886885  549274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:54:12.886930  549274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:54:12.903535  549274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36021
	I0414 11:54:12.904190  549274 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:54:12.904846  549274 main.go:141] libmachine: Using API Version  1
	I0414 11:54:12.904872  549274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:54:12.905338  549274 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:54:12.905526  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:12.945856  549274 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 11:54:12.947166  549274 start.go:297] selected driver: kvm2
	I0414 11:54:12.947187  549274 start.go:901] validating driver "kvm2" against &{Name:pause-066593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-066593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.103 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:54:12.947425  549274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:54:12.947799  549274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:54:12.947900  549274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 11:54:12.964430  549274 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 11:54:12.965373  549274 cni.go:84] Creating CNI manager for ""
	I0414 11:54:12.965432  549274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:54:12.965493  549274 start.go:340] cluster config:
	{Name:pause-066593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-066593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.103 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:54:12.965650  549274 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:54:12.967258  549274 out.go:177] * Starting "pause-066593" primary control-plane node in "pause-066593" cluster
	I0414 11:54:12.968586  549274 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 11:54:12.968631  549274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 11:54:12.968650  549274 cache.go:56] Caching tarball of preloaded images
	I0414 11:54:12.968738  549274 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 11:54:12.968769  549274 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 11:54:12.968935  549274 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/config.json ...
	I0414 11:54:12.969253  549274 start.go:360] acquireMachinesLock for pause-066593: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 11:54:29.735799  549274 start.go:364] duration metric: took 16.76648705s to acquireMachinesLock for "pause-066593"
	I0414 11:54:29.735867  549274 start.go:96] Skipping create...Using existing machine configuration
	I0414 11:54:29.735881  549274 fix.go:54] fixHost starting: 
	I0414 11:54:29.736323  549274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:54:29.736381  549274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:54:29.757795  549274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I0414 11:54:29.758385  549274 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:54:29.758907  549274 main.go:141] libmachine: Using API Version  1
	I0414 11:54:29.758930  549274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:54:29.759323  549274 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:54:29.759544  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:29.759688  549274 main.go:141] libmachine: (pause-066593) Calling .GetState
	I0414 11:54:29.761380  549274 fix.go:112] recreateIfNeeded on pause-066593: state=Running err=<nil>
	W0414 11:54:29.761415  549274 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 11:54:29.763351  549274 out.go:177] * Updating the running kvm2 "pause-066593" VM ...
	I0414 11:54:29.764706  549274 machine.go:93] provisionDockerMachine start ...
	I0414 11:54:29.764737  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:29.764947  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:29.767799  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:29.768397  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:29.768448  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:29.768645  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:29.768827  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:29.768995  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:29.769116  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:29.769270  549274 main.go:141] libmachine: Using SSH client type: native
	I0414 11:54:29.769504  549274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.103 22 <nil> <nil>}
	I0414 11:54:29.769513  549274 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 11:54:29.875430  549274 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-066593
	
	I0414 11:54:29.875464  549274 main.go:141] libmachine: (pause-066593) Calling .GetMachineName
	I0414 11:54:29.875751  549274 buildroot.go:166] provisioning hostname "pause-066593"
	I0414 11:54:29.875783  549274 main.go:141] libmachine: (pause-066593) Calling .GetMachineName
	I0414 11:54:29.876011  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:29.879004  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:29.879450  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:29.879481  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:29.879653  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:29.879857  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:29.880055  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:29.880212  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:29.880409  549274 main.go:141] libmachine: Using SSH client type: native
	I0414 11:54:29.880639  549274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.103 22 <nil> <nil>}
	I0414 11:54:29.880655  549274 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-066593 && echo "pause-066593" | sudo tee /etc/hostname
	I0414 11:54:30.005867  549274 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-066593
	
	I0414 11:54:30.005903  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:30.009058  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.009460  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:30.009494  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.009674  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:30.009896  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:30.010082  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:30.010228  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:30.010424  549274 main.go:141] libmachine: Using SSH client type: native
	I0414 11:54:30.010621  549274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.103 22 <nil> <nil>}
	I0414 11:54:30.010635  549274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-066593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-066593/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-066593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 11:54:30.128772  549274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 11:54:30.128804  549274 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 11:54:30.128838  549274 buildroot.go:174] setting up certificates
	I0414 11:54:30.128850  549274 provision.go:84] configureAuth start
	I0414 11:54:30.128859  549274 main.go:141] libmachine: (pause-066593) Calling .GetMachineName
	I0414 11:54:30.129147  549274 main.go:141] libmachine: (pause-066593) Calling .GetIP
	I0414 11:54:30.131727  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.132114  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:30.132145  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.132403  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:30.135121  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.135670  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:30.135706  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.135877  549274 provision.go:143] copyHostCerts
	I0414 11:54:30.135955  549274 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 11:54:30.135975  549274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 11:54:30.136047  549274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 11:54:30.136260  549274 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 11:54:30.136273  549274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 11:54:30.136301  549274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 11:54:30.136409  549274 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 11:54:30.136421  549274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 11:54:30.136448  549274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 11:54:30.136532  549274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.pause-066593 san=[127.0.0.1 192.168.50.103 localhost minikube pause-066593]
	I0414 11:54:30.615652  549274 provision.go:177] copyRemoteCerts
	I0414 11:54:30.615721  549274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 11:54:30.615753  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:30.619263  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.619817  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:30.619850  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.620030  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:30.620286  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:30.620489  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:30.620649  549274 sshutil.go:53] new ssh client: &{IP:192.168.50.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/pause-066593/id_rsa Username:docker}
	I0414 11:54:30.706206  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 11:54:30.735359  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0414 11:54:30.760915  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 11:54:30.789057  549274 provision.go:87] duration metric: took 660.190011ms to configureAuth
	I0414 11:54:30.789105  549274 buildroot.go:189] setting minikube options for container-runtime
	I0414 11:54:30.789397  549274 config.go:182] Loaded profile config "pause-066593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:54:30.789484  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:30.792724  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.793107  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:30.793144  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:30.793387  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:30.793641  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:30.793856  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:30.794028  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:30.794191  549274 main.go:141] libmachine: Using SSH client type: native
	I0414 11:54:30.794391  549274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.103 22 <nil> <nil>}
	I0414 11:54:30.794405  549274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 11:54:38.274929  549274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 11:54:38.274961  549274 machine.go:96] duration metric: took 8.510234104s to provisionDockerMachine
	I0414 11:54:38.274975  549274 start.go:293] postStartSetup for "pause-066593" (driver="kvm2")
	I0414 11:54:38.274990  549274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 11:54:38.275014  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:38.275470  549274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 11:54:38.275519  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:38.278385  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.278806  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:38.278831  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.279011  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:38.279200  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:38.279397  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:38.279549  549274 sshutil.go:53] new ssh client: &{IP:192.168.50.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/pause-066593/id_rsa Username:docker}
	I0414 11:54:38.363877  549274 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 11:54:38.368898  549274 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 11:54:38.368934  549274 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 11:54:38.369008  549274 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 11:54:38.369083  549274 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 11:54:38.369185  549274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 11:54:38.380180  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 11:54:38.402473  549274 start.go:296] duration metric: took 127.479093ms for postStartSetup
	I0414 11:54:38.402530  549274 fix.go:56] duration metric: took 8.666649155s for fixHost
	I0414 11:54:38.402557  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:38.405440  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.405771  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:38.405799  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.406056  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:38.406260  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:38.406456  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:38.406588  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:38.406724  549274 main.go:141] libmachine: Using SSH client type: native
	I0414 11:54:38.406958  549274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.103 22 <nil> <nil>}
	I0414 11:54:38.406969  549274 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 11:54:38.516574  549274 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744631678.511836914
	
	I0414 11:54:38.516599  549274 fix.go:216] guest clock: 1744631678.511836914
	I0414 11:54:38.516609  549274 fix.go:229] Guest: 2025-04-14 11:54:38.511836914 +0000 UTC Remote: 2025-04-14 11:54:38.40253542 +0000 UTC m=+25.606241192 (delta=109.301494ms)
	I0414 11:54:38.516659  549274 fix.go:200] guest clock delta is within tolerance: 109.301494ms
	I0414 11:54:38.516665  549274 start.go:83] releasing machines lock for "pause-066593", held for 8.780830546s
	I0414 11:54:38.516698  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:38.517026  549274 main.go:141] libmachine: (pause-066593) Calling .GetIP
	I0414 11:54:38.520241  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.520576  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:38.520596  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.520793  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:38.521415  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:38.521624  549274 main.go:141] libmachine: (pause-066593) Calling .DriverName
	I0414 11:54:38.521715  549274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 11:54:38.521782  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:38.521859  549274 ssh_runner.go:195] Run: cat /version.json
	I0414 11:54:38.521888  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHHostname
	I0414 11:54:38.524591  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.524743  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.524992  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:38.525022  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.525140  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:54:38.525171  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:54:38.525186  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:38.525394  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHPort
	I0414 11:54:38.525404  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:38.525610  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:38.525683  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHKeyPath
	I0414 11:54:38.525770  549274 sshutil.go:53] new ssh client: &{IP:192.168.50.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/pause-066593/id_rsa Username:docker}
	I0414 11:54:38.525819  549274 main.go:141] libmachine: (pause-066593) Calling .GetSSHUsername
	I0414 11:54:38.525961  549274 sshutil.go:53] new ssh client: &{IP:192.168.50.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/pause-066593/id_rsa Username:docker}
	I0414 11:54:38.612164  549274 ssh_runner.go:195] Run: systemctl --version
	I0414 11:54:38.637654  549274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 11:54:38.793502  549274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 11:54:38.823879  549274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 11:54:38.823963  549274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 11:54:38.892414  549274 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0414 11:54:38.892452  549274 start.go:495] detecting cgroup driver to use...
	I0414 11:54:38.892539  549274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 11:54:38.980920  549274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 11:54:39.049065  549274 docker.go:217] disabling cri-docker service (if available) ...
	I0414 11:54:39.049151  549274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 11:54:39.101871  549274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 11:54:39.163144  549274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 11:54:39.430697  549274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 11:54:39.747707  549274 docker.go:233] disabling docker service ...
	I0414 11:54:39.747795  549274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 11:54:39.836286  549274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 11:54:39.865832  549274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 11:54:40.129708  549274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 11:54:40.333121  549274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 11:54:40.349260  549274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 11:54:40.379491  549274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 11:54:40.379552  549274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:54:40.400519  549274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 11:54:40.400595  549274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:54:40.415175  549274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:54:40.430736  549274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:54:40.445410  549274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 11:54:40.461959  549274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:54:40.475052  549274 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:54:40.495243  549274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:54:40.509138  549274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 11:54:40.522696  549274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 11:54:40.537945  549274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:54:40.823378  549274 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 11:56:11.474171  549274 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.650751359s)
	I0414 11:56:11.474215  549274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 11:56:11.474285  549274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 11:56:11.483464  549274 start.go:563] Will wait 60s for crictl version
	I0414 11:56:11.483543  549274 ssh_runner.go:195] Run: which crictl
	I0414 11:56:11.489566  549274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 11:56:11.560117  549274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 11:56:11.560206  549274 ssh_runner.go:195] Run: crio --version
	I0414 11:56:11.600037  549274 ssh_runner.go:195] Run: crio --version
	I0414 11:56:11.644677  549274 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 11:56:11.645982  549274 main.go:141] libmachine: (pause-066593) Calling .GetIP
	I0414 11:56:11.649547  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:56:11.650024  549274 main.go:141] libmachine: (pause-066593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:7f:0e", ip: ""} in network mk-pause-066593: {Iface:virbr2 ExpiryTime:2025-04-14 12:53:34 +0000 UTC Type:0 Mac:52:54:00:fc:7f:0e Iaid: IPaddr:192.168.50.103 Prefix:24 Hostname:pause-066593 Clientid:01:52:54:00:fc:7f:0e}
	I0414 11:56:11.650050  549274 main.go:141] libmachine: (pause-066593) DBG | domain pause-066593 has defined IP address 192.168.50.103 and MAC address 52:54:00:fc:7f:0e in network mk-pause-066593
	I0414 11:56:11.650444  549274 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 11:56:11.656725  549274 kubeadm.go:883] updating cluster {Name:pause-066593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-066593 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.103 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 11:56:11.656912  549274 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 11:56:11.656975  549274 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:56:11.719506  549274 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 11:56:11.719535  549274 crio.go:433] Images already preloaded, skipping extraction
	I0414 11:56:11.719602  549274 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:56:11.767986  549274 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 11:56:11.768023  549274 cache_images.go:84] Images are preloaded, skipping loading
	I0414 11:56:11.768033  549274 kubeadm.go:934] updating node { 192.168.50.103 8443 v1.32.2 crio true true} ...
	I0414 11:56:11.768163  549274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-066593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-066593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 11:56:11.768282  549274 ssh_runner.go:195] Run: crio config
	I0414 11:56:11.845419  549274 cni.go:84] Creating CNI manager for ""
	I0414 11:56:11.845451  549274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 11:56:11.845467  549274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 11:56:11.845505  549274 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.103 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-066593 NodeName:pause-066593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 11:56:11.845722  549274 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-066593"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.103"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.103"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
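The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A minimal sketch of how such a multi-document config can be sanity-checked, assuming gopkg.in/yaml.v3 is available; the file name is a placeholder and this is not minikube code:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		panic(err)
	}
	// kubeadm configs are a stream of YAML documents separated by "---" lines.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var obj struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			panic(err)
		}
		fmt.Printf("found %s (%s)\n", obj.Kind, obj.APIVersion)
	}
}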
	
	I0414 11:56:11.845845  549274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 11:56:11.860461  549274 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 11:56:11.860555  549274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 11:56:11.874613  549274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0414 11:56:11.899780  549274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 11:56:11.927401  549274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0414 11:56:11.949934  549274 ssh_runner.go:195] Run: grep 192.168.50.103	control-plane.minikube.internal$ /etc/hosts
	I0414 11:56:11.954960  549274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:56:12.149875  549274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 11:56:12.171091  549274 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593 for IP: 192.168.50.103
	I0414 11:56:12.171121  549274 certs.go:194] generating shared ca certs ...
	I0414 11:56:12.171144  549274 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:56:12.171390  549274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 11:56:12.171453  549274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 11:56:12.171469  549274 certs.go:256] generating profile certs ...
	I0414 11:56:12.171611  549274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/client.key
	I0414 11:56:12.171720  549274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/apiserver.key.32c8cb54
	I0414 11:56:12.171809  549274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/proxy-client.key
	I0414 11:56:12.171986  549274 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 11:56:12.172038  549274 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 11:56:12.172052  549274 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 11:56:12.172089  549274 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 11:56:12.172121  549274 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 11:56:12.172152  549274 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 11:56:12.172217  549274 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 11:56:12.173050  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 11:56:12.215700  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 11:56:12.266921  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 11:56:12.305313  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 11:56:12.336322  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 11:56:12.364040  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 11:56:12.391169  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 11:56:12.418307  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/pause-066593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 11:56:12.445785  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 11:56:12.474840  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 11:56:12.504241  549274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 11:56:12.532760  549274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 11:56:12.551649  549274 ssh_runner.go:195] Run: openssl version
	I0414 11:56:12.559364  549274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 11:56:12.571846  549274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 11:56:12.576479  549274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 11:56:12.576561  549274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 11:56:12.583467  549274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 11:56:12.596631  549274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 11:56:12.610659  549274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 11:56:12.615724  549274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 11:56:12.615816  549274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 11:56:12.623176  549274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 11:56:12.636246  549274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 11:56:12.650698  549274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:56:12.657356  549274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:56:12.657446  549274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:56:12.664224  549274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 11:56:12.675682  549274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 11:56:12.681241  549274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 11:56:12.688766  549274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 11:56:12.695640  549274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 11:56:12.703427  549274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 11:56:12.710232  549274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 11:56:12.718601  549274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
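Each openssl run above uses -checkend 86400, which exits non-zero when the certificate expires within the next 86400 seconds (24 hours), so a failing check flags a cert that would need regeneration. A self-contained sketch of the same check; the path and threshold are placeholders rather than minikube's actual wiring:

package main

import (
	"fmt"
	"os/exec"
)

// expiresSoon reports whether the certificate at path expires within the given
// number of seconds, using the same openssl invocation seen in the log above.
func expiresSoon(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return false, nil // cert stays valid beyond the window
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // non-zero exit: cert expires within the window
	}
	return false, err // openssl could not be run at all
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}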
	I0414 11:56:12.725539  549274 kubeadm.go:392] StartCluster: {Name:pause-066593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-066593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.103 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:56:12.725708  549274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 11:56:12.725788  549274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 11:56:12.776313  549274 cri.go:89] found id: "94f7966a01a3d69a358730e0714b57e7bc1339e60ffc5004aa7ed0fea8470e16"
	I0414 11:56:12.776332  549274 cri.go:89] found id: "942085bc7c189deb89ef9e5654bda878aa24c03e7434a720590255a60b878150"
	I0414 11:56:12.776344  549274 cri.go:89] found id: "19ef471e39530fa50122d9f829aca64884f194a28b4beac8f845742a1c14d0e5"
	I0414 11:56:12.776349  549274 cri.go:89] found id: "92b3d4ca1ab438a54c28c37485770807aab2d34dab3b6464ecd6d7fc8a517202"
	I0414 11:56:12.776354  549274 cri.go:89] found id: "09f25292f1121926cb20cb291b9863c64a55f95373c88be9ff29910f5a3d81be"
	I0414 11:56:12.776358  549274 cri.go:89] found id: "84adfd8e4bdf59c7bc956ef6a27122c918ecdcde2bc1aa4637d9bf06050a0123"
	I0414 11:56:12.776362  549274 cri.go:89] found id: "f12e3fb4f97f802ef548194a559f7cc17c250c9917aed02c43518095e91691aa"
	I0414 11:56:12.776366  549274 cri.go:89] found id: "2fc3ef98b81f5a1ee362d003f3da18f7ec68a1b1985fc852aa97965d20c2f68c"
	I0414 11:56:12.776370  549274 cri.go:89] found id: "14c30990696599ca3b1ac5b478e65569203836dd4c2ca8995e247a4758f91025"
	I0414 11:56:12.776379  549274 cri.go:89] found id: "6997ab0f33f1e7eff080f9a03aff670ec7e2ab03d8b0b01cf22d7340bd78dd5f"
	I0414 11:56:12.776384  549274 cri.go:89] found id: "46f868b3c51d71be56b020bd29d5f20eb56f4243f5544239473c4b8f49fd9c37"
	I0414 11:56:12.776387  549274 cri.go:89] found id: ""
	I0414 11:56:12.776439  549274 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
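The container IDs listed just before the log is cut off come from the crictl invocation filtered on the io.kubernetes.pod.namespace=kube-system label. A rough, illustrative sketch of the same listing, assuming crictl is on PATH and sudo is available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as in the log: only containers whose pod lives in kube-system.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}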
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-066593 -n pause-066593
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-066593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-066593 logs -n 25: (1.021872964s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo cat                    | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo cat                    | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo cat                    | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	| start   | -p bridge-948178 --memory=3072                       | bridge-948178         | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:00:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:00:26.396329  559198 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:00:26.396661  559198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:00:26.396676  559198 out.go:358] Setting ErrFile to fd 2...
	I0414 12:00:26.396683  559198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:00:26.397035  559198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 12:00:26.397899  559198 out.go:352] Setting JSON to false
	I0414 12:00:26.399597  559198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20577,"bootTime":1744611449,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:00:26.399690  559198 start.go:139] virtualization: kvm guest
	I0414 12:00:26.401549  559198 out.go:177] * [bridge-948178] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:00:26.403264  559198 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 12:00:26.403264  559198 notify.go:220] Checking for updates...
	I0414 12:00:26.405846  559198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:00:26.407027  559198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:00:26.408161  559198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:26.409410  559198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:00:26.410606  559198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:00:26.412382  559198 config.go:182] Loaded profile config "enable-default-cni-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.412478  559198 config.go:182] Loaded profile config "flannel-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.412584  559198 config.go:182] Loaded profile config "pause-066593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.412675  559198 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:00:26.457276  559198 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 12:00:26.458412  559198 start.go:297] selected driver: kvm2
	I0414 12:00:26.458429  559198 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:00:26.458453  559198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:00:26.459228  559198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:00:26.459351  559198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:00:26.479988  559198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:00:26.480058  559198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:00:26.480335  559198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:00:26.480374  559198 cni.go:84] Creating CNI manager for "bridge"
	I0414 12:00:26.480381  559198 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:00:26.480483  559198 start.go:340] cluster config:
	{Name:bridge-948178 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-948178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:00:26.480623  559198 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:00:26.482578  559198 out.go:177] * Starting "bridge-948178" primary control-plane node in "bridge-948178" cluster
	I0414 12:00:26.483730  559198 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:00:26.483770  559198 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:00:26.483780  559198 cache.go:56] Caching tarball of preloaded images
	I0414 12:00:26.483898  559198 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:00:26.483912  559198 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:00:26.484008  559198 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/config.json ...
	I0414 12:00:26.484039  559198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/config.json: {Name:mk18a844ab90a532226f51306d62ef8609a40e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:26.484213  559198 start.go:360] acquireMachinesLock for bridge-948178: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:00:26.484253  559198 start.go:364] duration metric: took 21.895µs to acquireMachinesLock for "bridge-948178"
	I0414 12:00:26.484274  559198 start.go:93] Provisioning new machine with config: &{Name:bridge-948178 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-948178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:00:26.484334  559198 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 12:00:22.658284  557124 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 12:00:22.664988  557124 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 12:00:22.665009  557124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 12:00:22.687160  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 12:00:23.208279  557124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:00:23.208450  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:23.208565  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-948178 minikube.k8s.io/updated_at=2025_04_14T12_00_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4 minikube.k8s.io/name=flannel-948178 minikube.k8s.io/primary=true
	I0414 12:00:23.515930  557124 ops.go:34] apiserver oom_adj: -16
	I0414 12:00:23.515998  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:24.016289  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:24.516966  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:25.017024  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:25.516129  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:26.016669  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:26.516159  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:26.676827  557124 kubeadm.go:1113] duration metric: took 3.468429889s to wait for elevateKubeSystemPrivileges
	I0414 12:00:26.676885  557124 kubeadm.go:394] duration metric: took 15.16133565s to StartCluster
	I0414 12:00:26.676909  557124 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:26.677002  557124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:00:26.678381  557124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:26.678591  557124 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.207 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:00:26.678613  557124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 12:00:26.678834  557124 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 12:00:26.678949  557124 addons.go:69] Setting storage-provisioner=true in profile "flannel-948178"
	I0414 12:00:26.678972  557124 addons.go:238] Setting addon storage-provisioner=true in "flannel-948178"
	I0414 12:00:26.679000  557124 config.go:182] Loaded profile config "flannel-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.679015  557124 addons.go:69] Setting default-storageclass=true in profile "flannel-948178"
	I0414 12:00:26.679039  557124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-948178"
	I0414 12:00:26.679007  557124 host.go:66] Checking if "flannel-948178" exists ...
	I0414 12:00:26.679579  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.679579  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.679670  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.679699  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.681039  557124 out.go:177] * Verifying Kubernetes components...
	I0414 12:00:26.682454  557124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:00:26.699798  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0414 12:00:26.700386  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.701067  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0414 12:00:26.701153  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.701170  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.701633  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.701749  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.702036  557124 main.go:141] libmachine: (flannel-948178) Calling .GetState
	I0414 12:00:26.702201  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.702219  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.702665  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.703250  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.703328  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.706454  557124 addons.go:238] Setting addon default-storageclass=true in "flannel-948178"
	I0414 12:00:26.706502  557124 host.go:66] Checking if "flannel-948178" exists ...
	I0414 12:00:26.706881  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.706924  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.723868  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0414 12:00:26.724322  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.724928  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.724959  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.725389  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.725877  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.725912  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.738476  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0414 12:00:26.739181  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.740061  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.740094  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.741029  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.741284  557124 main.go:141] libmachine: (flannel-948178) Calling .GetState
	I0414 12:00:26.744185  557124 main.go:141] libmachine: (flannel-948178) Calling .DriverName
	I0414 12:00:26.746138  557124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:00:26.747768  557124 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:00:26.747803  557124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 12:00:26.747861  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHHostname
	I0414 12:00:26.752538  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.752593  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0414 12:00:26.753383  557124 main.go:141] libmachine: (flannel-948178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:1f:cd", ip: ""} in network mk-flannel-948178: {Iface:virbr1 ExpiryTime:2025-04-14 12:59:56 +0000 UTC Type:0 Mac:52:54:00:61:1f:cd Iaid: IPaddr:192.168.61.207 Prefix:24 Hostname:flannel-948178 Clientid:01:52:54:00:61:1f:cd}
	I0414 12:00:26.753411  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined IP address 192.168.61.207 and MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.753412  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.753591  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHPort
	I0414 12:00:26.753820  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHKeyPath
	I0414 12:00:26.754035  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHUsername
	I0414 12:00:26.754301  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.754321  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.754390  557124 sshutil.go:53] new ssh client: &{IP:192.168.61.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/flannel-948178/id_rsa Username:docker}
	I0414 12:00:26.754813  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.755034  557124 main.go:141] libmachine: (flannel-948178) Calling .GetState
	I0414 12:00:26.757003  557124 main.go:141] libmachine: (flannel-948178) Calling .DriverName
	I0414 12:00:26.757353  557124 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 12:00:26.757377  557124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 12:00:26.757399  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHHostname
	I0414 12:00:26.760484  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.760735  557124 main.go:141] libmachine: (flannel-948178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:1f:cd", ip: ""} in network mk-flannel-948178: {Iface:virbr1 ExpiryTime:2025-04-14 12:59:56 +0000 UTC Type:0 Mac:52:54:00:61:1f:cd Iaid: IPaddr:192.168.61.207 Prefix:24 Hostname:flannel-948178 Clientid:01:52:54:00:61:1f:cd}
	I0414 12:00:26.760759  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined IP address 192.168.61.207 and MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.761006  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHPort
	I0414 12:00:26.761203  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHKeyPath
	I0414 12:00:26.761386  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHUsername
	I0414 12:00:26.761524  557124 sshutil.go:53] new ssh client: &{IP:192.168.61.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/flannel-948178/id_rsa Username:docker}
	I0414 12:00:27.011962  557124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 12:00:27.012165  557124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:00:27.174745  557124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 12:00:27.174939  557124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:00:27.480731  557124 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0414 12:00:27.482040  557124 node_ready.go:35] waiting up to 15m0s for node "flannel-948178" to be "Ready" ...
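node_ready.go above polls the API server until the node reports a Ready condition. A compressed sketch of that kind of check using client-go, done once rather than inside a 15-minute poll; the node name matches the log, while the kubeconfig path and everything else are illustrative placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "flannel-948178", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The node is usable once its NodeReady condition is True.
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Println("node Ready:", ready)
}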
	I0414 12:00:26.225953  549274 out.go:235]   - Configuring RBAC rules ...
	I0414 12:00:26.226129  549274 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 12:00:26.233878  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 12:00:26.248135  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 12:00:26.257686  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 12:00:26.261944  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 12:00:26.267711  549274 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 12:00:26.552245  549274 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 12:00:27.016006  549274 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 12:00:27.556738  549274 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 12:00:27.556765  549274 kubeadm.go:310] 
	I0414 12:00:27.556849  549274 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 12:00:27.556859  549274 kubeadm.go:310] 
	I0414 12:00:27.556944  549274 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 12:00:27.556958  549274 kubeadm.go:310] 
	I0414 12:00:27.557002  549274 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 12:00:27.557081  549274 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 12:00:27.557153  549274 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 12:00:27.557175  549274 kubeadm.go:310] 
	I0414 12:00:27.557257  549274 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 12:00:27.557266  549274 kubeadm.go:310] 
	I0414 12:00:27.557330  549274 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 12:00:27.557340  549274 kubeadm.go:310] 
	I0414 12:00:27.557439  549274 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 12:00:27.557550  549274 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 12:00:27.557666  549274 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 12:00:27.557691  549274 kubeadm.go:310] 
	I0414 12:00:27.557838  549274 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 12:00:27.557955  549274 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 12:00:27.557973  549274 kubeadm.go:310] 
	I0414 12:00:27.558091  549274 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w1dw4z.5ubj2jd8d03ofny1 \
	I0414 12:00:27.558246  549274 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:218652e93704fc369ec14e3a4540532c3ba9e337011061ef10cc8e1465907a51 \
	I0414 12:00:27.558286  549274 kubeadm.go:310] 	--control-plane 
	I0414 12:00:27.558295  549274 kubeadm.go:310] 
	I0414 12:00:27.558413  549274 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 12:00:27.558425  549274 kubeadm.go:310] 
	I0414 12:00:27.558544  549274 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w1dw4z.5ubj2jd8d03ofny1 \
	I0414 12:00:27.558703  549274 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:218652e93704fc369ec14e3a4540532c3ba9e337011061ef10cc8e1465907a51 
	I0414 12:00:27.559637  549274 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:00:27.559698  549274 cni.go:84] Creating CNI manager for ""
	I0414 12:00:27.559750  549274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:00:27.561457  549274 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 12:00:27.562842  549274 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 12:00:27.574626  549274 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
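(Editor's note) The /etc/cni/net.d/1-k8s.conflist copied here is a bridge CNI plugin chain. The exact 496-byte payload is not shown in the log; the following is a minimal sketch of a plausible equivalent, where the pod subnet, plugin options, and the decision to print rather than write to /etc/cni/net.d are assumptions.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Build a minimal bridge CNI chain similar in shape to the conflist minikube
// installs. The subnet and plugin options are illustrative assumptions, not
// the exact file referenced in the log above.
func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}

	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// A real setup would write this to /etc/cni/net.d/1-k8s.conflist (root required).
	fmt.Println(string(out))
}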
	I0414 12:00:27.598966  549274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:00:27.599164  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:27.599276  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-066593 minikube.k8s.io/updated_at=2025_04_14T12_00_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4 minikube.k8s.io/name=pause-066593 minikube.k8s.io/primary=true
	I0414 12:00:27.625515  549274 ops.go:34] apiserver oom_adj: -16
	I0414 12:00:27.745234  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:27.848898  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.848931  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.849474  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.849573  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.849637  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.849667  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.849689  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.850075  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.850093  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.851027  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.851114  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.853947  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.854141  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.854213  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.854246  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.854268  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.855242  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.857077  557124 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0414 12:00:27.857099  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.875255  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.875305  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.875776  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.875803  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.875804  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.878046  557124 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 12:00:26.485925  559198 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 12:00:26.486091  559198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.486150  559198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.504598  559198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0414 12:00:26.505115  559198 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.505793  559198 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.505836  559198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.506222  559198 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.506427  559198 main.go:141] libmachine: (bridge-948178) Calling .GetMachineName
	I0414 12:00:26.506555  559198 main.go:141] libmachine: (bridge-948178) Calling .DriverName
	I0414 12:00:26.506661  559198 start.go:159] libmachine.API.Create for "bridge-948178" (driver="kvm2")
	I0414 12:00:26.506696  559198 client.go:168] LocalClient.Create starting
	I0414 12:00:26.506732  559198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem
	I0414 12:00:26.506771  559198 main.go:141] libmachine: Decoding PEM data...
	I0414 12:00:26.506788  559198 main.go:141] libmachine: Parsing certificate...
	I0414 12:00:26.506889  559198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem
	I0414 12:00:26.506919  559198 main.go:141] libmachine: Decoding PEM data...
	I0414 12:00:26.506948  559198 main.go:141] libmachine: Parsing certificate...
	I0414 12:00:26.506972  559198 main.go:141] libmachine: Running pre-create checks...
	I0414 12:00:26.506996  559198 main.go:141] libmachine: (bridge-948178) Calling .PreCreateCheck
	I0414 12:00:26.507354  559198 main.go:141] libmachine: (bridge-948178) Calling .GetConfigRaw
	I0414 12:00:26.507770  559198 main.go:141] libmachine: Creating machine...
	I0414 12:00:26.507788  559198 main.go:141] libmachine: (bridge-948178) Calling .Create
	I0414 12:00:26.507962  559198 main.go:141] libmachine: (bridge-948178) creating KVM machine...
	I0414 12:00:26.507980  559198 main.go:141] libmachine: (bridge-948178) creating network...
	I0414 12:00:26.509569  559198 main.go:141] libmachine: (bridge-948178) DBG | found existing default KVM network
	I0414 12:00:26.510931  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:26.510763  559220 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013ca0}
	I0414 12:00:26.510952  559198 main.go:141] libmachine: (bridge-948178) DBG | created network xml: 
	I0414 12:00:26.510962  559198 main.go:141] libmachine: (bridge-948178) DBG | <network>
	I0414 12:00:26.510984  559198 main.go:141] libmachine: (bridge-948178) DBG |   <name>mk-bridge-948178</name>
	I0414 12:00:26.510995  559198 main.go:141] libmachine: (bridge-948178) DBG |   <dns enable='no'/>
	I0414 12:00:26.511001  559198 main.go:141] libmachine: (bridge-948178) DBG |   
	I0414 12:00:26.511015  559198 main.go:141] libmachine: (bridge-948178) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 12:00:26.511033  559198 main.go:141] libmachine: (bridge-948178) DBG |     <dhcp>
	I0414 12:00:26.511043  559198 main.go:141] libmachine: (bridge-948178) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 12:00:26.511055  559198 main.go:141] libmachine: (bridge-948178) DBG |     </dhcp>
	I0414 12:00:26.511066  559198 main.go:141] libmachine: (bridge-948178) DBG |   </ip>
	I0414 12:00:26.511074  559198 main.go:141] libmachine: (bridge-948178) DBG |   
	I0414 12:00:26.511081  559198 main.go:141] libmachine: (bridge-948178) DBG | </network>
	I0414 12:00:26.511090  559198 main.go:141] libmachine: (bridge-948178) DBG | 
	I0414 12:00:26.516766  559198 main.go:141] libmachine: (bridge-948178) DBG | trying to create private KVM network mk-bridge-948178 192.168.39.0/24...
	I0414 12:00:26.624939  559198 main.go:141] libmachine: (bridge-948178) DBG | private KVM network mk-bridge-948178 192.168.39.0/24 created
	I0414 12:00:26.624976  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:26.624856  559220 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:26.625180  559198 main.go:141] libmachine: (bridge-948178) setting up store path in /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178 ...
	I0414 12:00:26.625208  559198 main.go:141] libmachine: (bridge-948178) building disk image from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 12:00:26.625227  559198 main.go:141] libmachine: (bridge-948178) Downloading /home/jenkins/minikube-integration/20534-503273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 12:00:26.986299  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:26.986134  559220 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/id_rsa...
	I0414 12:00:27.014092  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:27.013925  559220 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/bridge-948178.rawdisk...
	I0414 12:00:27.014120  559198 main.go:141] libmachine: (bridge-948178) DBG | Writing magic tar header
	I0414 12:00:27.014135  559198 main.go:141] libmachine: (bridge-948178) DBG | Writing SSH key tar header
	I0414 12:00:27.014146  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:27.014113  559220 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178 ...
	I0414 12:00:27.014296  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178 (perms=drwx------)
	I0414 12:00:27.014315  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines (perms=drwxr-xr-x)
	I0414 12:00:27.014329  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube (perms=drwxr-xr-x)
	I0414 12:00:27.014343  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273 (perms=drwxrwxr-x)
	I0414 12:00:27.014360  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 12:00:27.014373  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 12:00:27.014384  559198 main.go:141] libmachine: (bridge-948178) creating domain...
	I0414 12:00:27.014400  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178
	I0414 12:00:27.014408  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines
	I0414 12:00:27.014424  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:27.014437  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273
	I0414 12:00:27.014449  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 12:00:27.014456  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins
	I0414 12:00:27.014469  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home
	I0414 12:00:27.014478  559198 main.go:141] libmachine: (bridge-948178) DBG | skipping /home - not owner
	I0414 12:00:27.016124  559198 main.go:141] libmachine: (bridge-948178) define libvirt domain using xml: 
	I0414 12:00:27.016141  559198 main.go:141] libmachine: (bridge-948178) <domain type='kvm'>
	I0414 12:00:27.016150  559198 main.go:141] libmachine: (bridge-948178)   <name>bridge-948178</name>
	I0414 12:00:27.016160  559198 main.go:141] libmachine: (bridge-948178)   <memory unit='MiB'>3072</memory>
	I0414 12:00:27.016168  559198 main.go:141] libmachine: (bridge-948178)   <vcpu>2</vcpu>
	I0414 12:00:27.016174  559198 main.go:141] libmachine: (bridge-948178)   <features>
	I0414 12:00:27.016187  559198 main.go:141] libmachine: (bridge-948178)     <acpi/>
	I0414 12:00:27.016193  559198 main.go:141] libmachine: (bridge-948178)     <apic/>
	I0414 12:00:27.016214  559198 main.go:141] libmachine: (bridge-948178)     <pae/>
	I0414 12:00:27.016220  559198 main.go:141] libmachine: (bridge-948178)     
	I0414 12:00:27.016228  559198 main.go:141] libmachine: (bridge-948178)   </features>
	I0414 12:00:27.016235  559198 main.go:141] libmachine: (bridge-948178)   <cpu mode='host-passthrough'>
	I0414 12:00:27.016240  559198 main.go:141] libmachine: (bridge-948178)   
	I0414 12:00:27.016246  559198 main.go:141] libmachine: (bridge-948178)   </cpu>
	I0414 12:00:27.016253  559198 main.go:141] libmachine: (bridge-948178)   <os>
	I0414 12:00:27.016259  559198 main.go:141] libmachine: (bridge-948178)     <type>hvm</type>
	I0414 12:00:27.016268  559198 main.go:141] libmachine: (bridge-948178)     <boot dev='cdrom'/>
	I0414 12:00:27.016274  559198 main.go:141] libmachine: (bridge-948178)     <boot dev='hd'/>
	I0414 12:00:27.016282  559198 main.go:141] libmachine: (bridge-948178)     <bootmenu enable='no'/>
	I0414 12:00:27.016287  559198 main.go:141] libmachine: (bridge-948178)   </os>
	I0414 12:00:27.016294  559198 main.go:141] libmachine: (bridge-948178)   <devices>
	I0414 12:00:27.016301  559198 main.go:141] libmachine: (bridge-948178)     <disk type='file' device='cdrom'>
	I0414 12:00:27.016313  559198 main.go:141] libmachine: (bridge-948178)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/boot2docker.iso'/>
	I0414 12:00:27.016320  559198 main.go:141] libmachine: (bridge-948178)       <target dev='hdc' bus='scsi'/>
	I0414 12:00:27.016344  559198 main.go:141] libmachine: (bridge-948178)       <readonly/>
	I0414 12:00:27.016351  559198 main.go:141] libmachine: (bridge-948178)     </disk>
	I0414 12:00:27.016359  559198 main.go:141] libmachine: (bridge-948178)     <disk type='file' device='disk'>
	I0414 12:00:27.016367  559198 main.go:141] libmachine: (bridge-948178)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 12:00:27.016379  559198 main.go:141] libmachine: (bridge-948178)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/bridge-948178.rawdisk'/>
	I0414 12:00:27.016385  559198 main.go:141] libmachine: (bridge-948178)       <target dev='hda' bus='virtio'/>
	I0414 12:00:27.016391  559198 main.go:141] libmachine: (bridge-948178)     </disk>
	I0414 12:00:27.016397  559198 main.go:141] libmachine: (bridge-948178)     <interface type='network'>
	I0414 12:00:27.016405  559198 main.go:141] libmachine: (bridge-948178)       <source network='mk-bridge-948178'/>
	I0414 12:00:27.016412  559198 main.go:141] libmachine: (bridge-948178)       <model type='virtio'/>
	I0414 12:00:27.016419  559198 main.go:141] libmachine: (bridge-948178)     </interface>
	I0414 12:00:27.016427  559198 main.go:141] libmachine: (bridge-948178)     <interface type='network'>
	I0414 12:00:27.016436  559198 main.go:141] libmachine: (bridge-948178)       <source network='default'/>
	I0414 12:00:27.016443  559198 main.go:141] libmachine: (bridge-948178)       <model type='virtio'/>
	I0414 12:00:27.016451  559198 main.go:141] libmachine: (bridge-948178)     </interface>
	I0414 12:00:27.016458  559198 main.go:141] libmachine: (bridge-948178)     <serial type='pty'>
	I0414 12:00:27.016466  559198 main.go:141] libmachine: (bridge-948178)       <target port='0'/>
	I0414 12:00:27.016472  559198 main.go:141] libmachine: (bridge-948178)     </serial>
	I0414 12:00:27.016481  559198 main.go:141] libmachine: (bridge-948178)     <console type='pty'>
	I0414 12:00:27.016489  559198 main.go:141] libmachine: (bridge-948178)       <target type='serial' port='0'/>
	I0414 12:00:27.016496  559198 main.go:141] libmachine: (bridge-948178)     </console>
	I0414 12:00:27.016503  559198 main.go:141] libmachine: (bridge-948178)     <rng model='virtio'>
	I0414 12:00:27.016513  559198 main.go:141] libmachine: (bridge-948178)       <backend model='random'>/dev/random</backend>
	I0414 12:00:27.016521  559198 main.go:141] libmachine: (bridge-948178)     </rng>
	I0414 12:00:27.016528  559198 main.go:141] libmachine: (bridge-948178)     
	I0414 12:00:27.016533  559198 main.go:141] libmachine: (bridge-948178)     
	I0414 12:00:27.016540  559198 main.go:141] libmachine: (bridge-948178)   </devices>
	I0414 12:00:27.016546  559198 main.go:141] libmachine: (bridge-948178) </domain>
	I0414 12:00:27.016556  559198 main.go:141] libmachine: (bridge-948178) 
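(Editor's note) The <domain> XML above is handed to libvirt to define and then start the VM. A minimal sketch of those two calls, assuming the libvirt.org/go/libvirt bindings and a local XML file path; the kvm2 driver plugin wraps the same operations behind its own RPC interface, so this is illustrative rather than minikube's actual code path.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

// Define a domain from an XML description and boot it, roughly the
// "define libvirt domain using xml" / "starting domain..." steps in the log.
func main() {
	xml, err := os.ReadFile("bridge-948178.xml") // assumed path for this sketch
	if err != nil {
		log.Fatalf("reading domain xml: %v", err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connecting to libvirt: %v", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("defining domain: %v", err)
	}
	defer dom.Free()

	// Create() boots the defined-but-inactive domain.
	if err := dom.Create(); err != nil {
		log.Fatalf("starting domain: %v", err)
	}
	log.Println("domain started")
}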
	I0414 12:00:27.021675  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:27:ee:49 in network default
	I0414 12:00:27.022461  559198 main.go:141] libmachine: (bridge-948178) starting domain...
	I0414 12:00:27.022482  559198 main.go:141] libmachine: (bridge-948178) ensuring networks are active...
	I0414 12:00:27.022500  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:27.023352  559198 main.go:141] libmachine: (bridge-948178) Ensuring network default is active
	I0414 12:00:27.023800  559198 main.go:141] libmachine: (bridge-948178) Ensuring network mk-bridge-948178 is active
	I0414 12:00:27.024528  559198 main.go:141] libmachine: (bridge-948178) getting domain XML...
	I0414 12:00:27.025504  559198 main.go:141] libmachine: (bridge-948178) creating domain...
	I0414 12:00:28.583079  559198 main.go:141] libmachine: (bridge-948178) waiting for IP...
	I0414 12:00:28.584006  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:28.584577  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:28.584607  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:28.584569  559220 retry.go:31] will retry after 229.183608ms: waiting for domain to come up
	I0414 12:00:28.815057  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:28.815605  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:28.815639  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:28.815550  559220 retry.go:31] will retry after 334.13925ms: waiting for domain to come up
	I0414 12:00:29.152077  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:29.152659  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:29.152693  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:29.152614  559220 retry.go:31] will retry after 298.638311ms: waiting for domain to come up
	I0414 12:00:29.453156  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:29.453729  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:29.453752  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:29.453698  559220 retry.go:31] will retry after 603.190901ms: waiting for domain to come up
	I0414 12:00:30.058621  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:30.059252  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:30.059304  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:30.059194  559220 retry.go:31] will retry after 658.644344ms: waiting for domain to come up
	I0414 12:00:30.719846  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:30.720474  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:30.720509  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:30.720433  559220 retry.go:31] will retry after 942.95162ms: waiting for domain to come up
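(Editor's note) The repeated "will retry after ...: waiting for domain to come up" lines are a jittered, growing backoff around a DHCP-lease lookup for the domain's MAC address. A stripped-down sketch of that pattern; lookupLeaseIP is a hypothetical stand-in for the driver's libvirt lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical placeholder for querying the libvirt
// network's DHCP leases for the guest's MAC; it fails until the guest boots.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries with a growing, jittered delay, mirroring the retry lines
// in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 2*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:7b:f7:b1", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}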
	I0414 12:00:28.245379  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:28.745969  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:29.246330  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:29.745312  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:30.245854  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:30.745749  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:31.246305  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:31.745780  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:31.839742  549274 kubeadm.go:1113] duration metric: took 4.240626698s to wait for elevateKubeSystemPrivileges
	I0414 12:00:31.839796  549274 kubeadm.go:394] duration metric: took 4m19.114269037s to StartCluster
	I0414 12:00:31.839824  549274 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:31.839915  549274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:00:31.841162  549274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:31.841413  549274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.103 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:00:31.841552  549274 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 12:00:31.841754  549274 config.go:182] Loaded profile config "pause-066593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:31.843235  549274 out.go:177] * Verifying Kubernetes components...
	I0414 12:00:31.843335  549274 out.go:177] * Enabled addons: 
	I0414 12:00:27.879195  557124 addons.go:514] duration metric: took 1.20038234s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 12:00:27.994977  557124 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-948178" context rescaled to 1 replicas
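(Editor's note) Rescaling the coredns deployment to one replica, as logged above, amounts to a scale-subresource update. A minimal client-go sketch under the assumption that a kubeconfig at the minikube path is usable from where this runs.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Scale the coredns deployment in kube-system down to one replica,
// the same effect as the "rescaled to 1 replicas" log line.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deployments := client.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("coredns rescaled to 1 replica")
}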
	I0414 12:00:29.485890  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:31.986771  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:31.844459  549274 addons.go:514] duration metric: took 2.921972ms for enable addons: enabled=[]
	I0414 12:00:31.844503  549274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:00:32.081288  549274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:00:32.110510  549274 node_ready.go:35] waiting up to 6m0s for node "pause-066593" to be "Ready" ...
	I0414 12:00:32.121856  549274 node_ready.go:49] node "pause-066593" has status "Ready":"True"
	I0414 12:00:32.121884  549274 node_ready.go:38] duration metric: took 11.329073ms for node "pause-066593" to be "Ready" ...
	I0414 12:00:32.121896  549274 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:00:32.133616  549274 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:31.665745  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:31.666266  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:31.666314  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:31.666255  559220 retry.go:31] will retry after 928.824569ms: waiting for domain to come up
	I0414 12:00:32.596434  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:32.597113  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:32.597149  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:32.597044  559220 retry.go:31] will retry after 1.012619466s: waiting for domain to come up
	I0414 12:00:33.611586  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:33.612237  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:33.612270  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:33.612188  559220 retry.go:31] will retry after 1.299147937s: waiting for domain to come up
	I0414 12:00:34.913627  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:34.914443  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:34.914478  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:34.914397  559220 retry.go:31] will retry after 2.036180868s: waiting for domain to come up
	I0414 12:00:34.485813  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:36.486300  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:36.986247  557124 node_ready.go:49] node "flannel-948178" has status "Ready":"True"
	I0414 12:00:36.986281  557124 node_ready.go:38] duration metric: took 9.50419715s for node "flannel-948178" to be "Ready" ...
	I0414 12:00:36.986295  557124 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:00:36.990506  557124 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wqh8l" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:34.139145  549274 pod_ready.go:103] pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace has status "Ready":"False"
	I0414 12:00:36.140739  549274 pod_ready.go:103] pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace has status "Ready":"False"
	I0414 12:00:37.639976  549274 pod_ready.go:93] pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:37.640014  549274 pod_ready.go:82] duration metric: took 5.506361315s for pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:37.640029  549274 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5crvb" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:37.645072  549274 pod_ready.go:93] pod "coredns-668d6bf9bc-5crvb" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:37.645102  549274 pod_ready.go:82] duration metric: took 5.064335ms for pod "coredns-668d6bf9bc-5crvb" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:37.645117  549274 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:36.952762  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:36.953353  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:36.953383  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:36.953316  559220 retry.go:31] will retry after 2.161331717s: waiting for domain to come up
	I0414 12:00:39.116604  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:39.117144  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:39.117171  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:39.117111  559220 retry.go:31] will retry after 2.644029765s: waiting for domain to come up
	I0414 12:00:39.651260  549274 pod_ready.go:103] pod "etcd-pause-066593" in "kube-system" namespace has status "Ready":"False"
	I0414 12:00:41.153606  549274 pod_ready.go:93] pod "etcd-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.153631  549274 pod_ready.go:82] duration metric: took 3.508505825s for pod "etcd-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.153641  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.157750  549274 pod_ready.go:93] pod "kube-apiserver-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.157772  549274 pod_ready.go:82] duration metric: took 4.124912ms for pod "kube-apiserver-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.157786  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.161721  549274 pod_ready.go:93] pod "kube-controller-manager-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.161742  549274 pod_ready.go:82] duration metric: took 3.948902ms for pod "kube-controller-manager-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.161753  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ggp22" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.165685  549274 pod_ready.go:93] pod "kube-proxy-ggp22" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.165712  549274 pod_ready.go:82] duration metric: took 3.952452ms for pod "kube-proxy-ggp22" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.165725  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.237393  549274 pod_ready.go:93] pod "kube-scheduler-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.237427  549274 pod_ready.go:82] duration metric: took 71.693021ms for pod "kube-scheduler-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.237439  549274 pod_ready.go:39] duration metric: took 9.115527027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
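(Editor's note) The pod_ready waits above reduce to polling each pod and inspecting its Ready condition. A minimal client-go sketch of that check; the pod name, namespace, and kubeconfig path are assumptions chosen to match the log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the predicate
// behind the `"Ready":"True"` pod_ready log lines.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-066593", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}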
	I0414 12:00:41.237460  549274 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:00:41.237531  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:00:41.254696  549274 api_server.go:72] duration metric: took 9.413246788s to wait for apiserver process to appear ...
	I0414 12:00:41.254728  549274 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:00:41.254752  549274 api_server.go:253] Checking apiserver healthz at https://192.168.50.103:8443/healthz ...
	I0414 12:00:41.259884  549274 api_server.go:279] https://192.168.50.103:8443/healthz returned 200:
	ok
	I0414 12:00:41.260826  549274 api_server.go:141] control plane version: v1.32.2
	I0414 12:00:41.260851  549274 api_server.go:131] duration metric: took 6.115424ms to wait for apiserver health ...
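(Editor's note) The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 with an "ok" body. A minimal sketch using net/http; the CA certificate path is an assumption, and the endpoint mirrors the one in the log.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

// Probe the apiserver's /healthz endpoint, the check behind the
// "returned 200: ok" lines.
func main() {
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed CA path
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("failed to parse CA certificate")
	}

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	resp, err := client.Get("https://192.168.50.103:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}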
	I0414 12:00:41.260861  549274 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:00:41.437661  549274 system_pods.go:59] 7 kube-system pods found
	I0414 12:00:41.437700  549274 system_pods.go:61] "coredns-668d6bf9bc-2k558" [269c281c-7c8b-4bf6-8127-04c476b3ef79] Running
	I0414 12:00:41.437708  549274 system_pods.go:61] "coredns-668d6bf9bc-5crvb" [9f645792-327d-4c97-8f28-7c50783fa8af] Running
	I0414 12:00:41.437713  549274 system_pods.go:61] "etcd-pause-066593" [93b2c796-790c-4e08-96e6-de23e05b580a] Running
	I0414 12:00:41.437718  549274 system_pods.go:61] "kube-apiserver-pause-066593" [f67ce6ce-2ac7-4d2d-a523-715332d41cd6] Running
	I0414 12:00:41.437723  549274 system_pods.go:61] "kube-controller-manager-pause-066593" [d730d1eb-17ce-424f-aac9-ef23cc5d5088] Running
	I0414 12:00:41.437729  549274 system_pods.go:61] "kube-proxy-ggp22" [c745b3df-3bb6-4de0-acd4-9a541f0aa3e6] Running
	I0414 12:00:41.437734  549274 system_pods.go:61] "kube-scheduler-pause-066593" [8bed515e-c64b-44e6-b527-bc3115a0010e] Running
	I0414 12:00:41.437742  549274 system_pods.go:74] duration metric: took 176.874043ms to wait for pod list to return data ...
	I0414 12:00:41.437753  549274 default_sa.go:34] waiting for default service account to be created ...
	I0414 12:00:41.637230  549274 default_sa.go:45] found service account: "default"
	I0414 12:00:41.637269  549274 default_sa.go:55] duration metric: took 199.505585ms for default service account to be created ...
	I0414 12:00:41.637283  549274 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 12:00:41.838027  549274 system_pods.go:86] 7 kube-system pods found
	I0414 12:00:41.838064  549274 system_pods.go:89] "coredns-668d6bf9bc-2k558" [269c281c-7c8b-4bf6-8127-04c476b3ef79] Running
	I0414 12:00:41.838072  549274 system_pods.go:89] "coredns-668d6bf9bc-5crvb" [9f645792-327d-4c97-8f28-7c50783fa8af] Running
	I0414 12:00:41.838078  549274 system_pods.go:89] "etcd-pause-066593" [93b2c796-790c-4e08-96e6-de23e05b580a] Running
	I0414 12:00:41.838084  549274 system_pods.go:89] "kube-apiserver-pause-066593" [f67ce6ce-2ac7-4d2d-a523-715332d41cd6] Running
	I0414 12:00:41.838089  549274 system_pods.go:89] "kube-controller-manager-pause-066593" [d730d1eb-17ce-424f-aac9-ef23cc5d5088] Running
	I0414 12:00:41.838095  549274 system_pods.go:89] "kube-proxy-ggp22" [c745b3df-3bb6-4de0-acd4-9a541f0aa3e6] Running
	I0414 12:00:41.838102  549274 system_pods.go:89] "kube-scheduler-pause-066593" [8bed515e-c64b-44e6-b527-bc3115a0010e] Running
	I0414 12:00:41.838110  549274 system_pods.go:126] duration metric: took 200.82002ms to wait for k8s-apps to be running ...
	I0414 12:00:41.838119  549274 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 12:00:41.838176  549274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:00:41.853173  549274 system_svc.go:56] duration metric: took 15.041515ms WaitForService to wait for kubelet
	I0414 12:00:41.853216  549274 kubeadm.go:582] duration metric: took 10.011768401s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:00:41.853252  549274 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:00:42.038262  549274 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:00:42.038301  549274 node_conditions.go:123] node cpu capacity is 2
	I0414 12:00:42.038328  549274 node_conditions.go:105] duration metric: took 185.06903ms to run NodePressure ...
	I0414 12:00:42.038345  549274 start.go:241] waiting for startup goroutines ...
	I0414 12:00:42.038356  549274 start.go:246] waiting for cluster config update ...
	I0414 12:00:42.038366  549274 start.go:255] writing updated cluster config ...
	I0414 12:00:42.038678  549274 ssh_runner.go:195] Run: rm -f paused
	I0414 12:00:42.108260  549274 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 12:00:42.110530  549274 out.go:177] * Done! kubectl is now configured to use "pause-066593" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.752491277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632042752456038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb3507cf-8747-41cd-a791-30ab3956daef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.753013680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ff1a9b8-b8c4-4c5a-bcb2-de43903336cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.753137962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ff1a9b8-b8c4-4c5a-bcb2-de43903336cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.753368070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ff1a9b8-b8c4-4c5a-bcb2-de43903336cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.788056322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55b521e7-78b1-4ae8-bec9-87a97f145d62 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.788150871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55b521e7-78b1-4ae8-bec9-87a97f145d62 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.789099074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0afb2428-1a9e-42cf-aee0-55f43634314c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.789471823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632042789450846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0afb2428-1a9e-42cf-aee0-55f43634314c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.790102878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65f8f7aa-7691-4043-be97-dd1b3e878ebf name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.790163154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65f8f7aa-7691-4043-be97-dd1b3e878ebf name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.790336947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65f8f7aa-7691-4043-be97-dd1b3e878ebf name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.826315416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a68e573-06ba-4765-a373-4aa94f24e075 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.826410668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a68e573-06ba-4765-a373-4aa94f24e075 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.828317616Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28c5ad87-769c-4f7d-ae5f-633e34cf98f6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.828783305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632042828760103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28c5ad87-769c-4f7d-ae5f-633e34cf98f6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.829304010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c01e8026-d48a-4bdb-8ecc-e57bc590f36c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.829354460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c01e8026-d48a-4bdb-8ecc-e57bc590f36c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.829600640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c01e8026-d48a-4bdb-8ecc-e57bc590f36c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.864224094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b12c6f8b-da52-4988-80c7-dabc21ebc091 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.864310573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b12c6f8b-da52-4988-80c7-dabc21ebc091 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.865460548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24ac5e51-6a9d-46c4-bc8e-ddf8530bccef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.865946204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632042865916715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24ac5e51-6a9d-46c4-bc8e-ddf8530bccef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.866618636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5864c14c-c4a7-4374-9700-568934e22a57 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.866679467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5864c14c-c4a7-4374-9700-568934e22a57 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:42 pause-066593 crio[2743]: time="2025-04-14 12:00:42.866977425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5864c14c-c4a7-4374-9700-568934e22a57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16b5bdd3e5450       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   0                   6b68794528998       coredns-668d6bf9bc-2k558
	149b47ce1fb03       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   0                   293d20394c0f3       coredns-668d6bf9bc-5crvb
	87aa7f319bf45       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   10 seconds ago      Running             kube-proxy                0                   82e503ade334c       kube-proxy-ggp22
	5c944475ec9b3       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   21 seconds ago      Running             kube-apiserver            1                   d00be87d9a5b5       kube-apiserver-pause-066593
	6f8d02c906226       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      3                   7c3220e560c80       etcd-pause-066593
	b1b941099bf94       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   21 seconds ago      Running             kube-controller-manager   7                   6b2d674ef151b       kube-controller-manager-pause-066593
	0b9a2f65cf2c7       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   21 seconds ago      Running             kube-scheduler            3                   eaff9f6e10fc3       kube-scheduler-pause-066593
	
	
	==> coredns [149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               pause-066593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-066593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
	                    minikube.k8s.io/name=pause-066593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T12_00_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 12:00:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-066593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 12:00:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.103
	  Hostname:    pause-066593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bfc41bdef014994b1f7eb6ea162d142
	  System UUID:                9bfc41bd-ef01-4994-b1f7-eb6ea162d142
	  Boot ID:                    91d7da9b-b538-4632-a109-87b8e73d2f92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2k558                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12s
	  kube-system                 coredns-668d6bf9bc-5crvb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12s
	  kube-system                 etcd-pause-066593                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         18s
	  kube-system                 kube-apiserver-pause-066593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-pause-066593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-proxy-ggp22                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-scheduler-pause-066593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10s   kube-proxy       
	  Normal  Starting                 17s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16s   kubelet          Node pause-066593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s   kubelet          Node pause-066593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s   kubelet          Node pause-066593 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s   node-controller  Node pause-066593 event: Registered Node pause-066593 in Controller
	
	
	==> dmesg <==
	[  +0.144704] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.290594] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.099919] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[  +4.921018] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.063612] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.984505] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.081614] kauditd_printk_skb: 69 callbacks suppressed
	[Apr14 11:54] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.580437] kauditd_printk_skb: 46 callbacks suppressed
	[ +34.996443] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.534441] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.308029] systemd-fstab-generator[2494]: Ignoring "noauto" option for root device
	[  +0.339805] systemd-fstab-generator[2558]: Ignoring "noauto" option for root device
	[  +0.252565] systemd-fstab-generator[2589]: Ignoring "noauto" option for root device
	[  +0.447690] systemd-fstab-generator[2619]: Ignoring "noauto" option for root device
	[Apr14 11:56] systemd-fstab-generator[2860]: Ignoring "noauto" option for root device
	[  +0.108136] kauditd_printk_skb: 169 callbacks suppressed
	[  +2.231479] systemd-fstab-generator[3209]: Ignoring "noauto" option for root device
	[ +13.607655] kauditd_printk_skb: 92 callbacks suppressed
	[Apr14 12:00] systemd-fstab-generator[9287]: Ignoring "noauto" option for root device
	[  +6.578300] systemd-fstab-generator[9627]: Ignoring "noauto" option for root device
	[  +0.128794] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.201392] systemd-fstab-generator[9741]: Ignoring "noauto" option for root device
	[  +0.123843] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.491782] kauditd_printk_skb: 66 callbacks suppressed
	
	
	==> etcd [6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c] <==
	{"level":"info","ts":"2025-04-14T12:00:21.964425Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T12:00:21.964831Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"1a05c061259a58e9","initial-advertise-peer-urls":["https://192.168.50.103:2380"],"listen-peer-urls":["https://192.168.50.103:2380"],"advertise-client-urls":["https://192.168.50.103:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.103:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T12:00:21.964892Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T12:00:21.964991Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.103:2380"}
	{"level":"info","ts":"2025-04-14T12:00:21.965013Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.103:2380"}
	{"level":"info","ts":"2025-04-14T12:00:22.206639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-14T12:00:22.206776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-14T12:00:22.206825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 received MsgPreVoteResp from 1a05c061259a58e9 at term 1"}
	{"level":"info","ts":"2025-04-14T12:00:22.206859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 became candidate at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.206907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 received MsgVoteResp from 1a05c061259a58e9 at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.206980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 became leader at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.207014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a05c061259a58e9 elected leader 1a05c061259a58e9 at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.209975Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"1a05c061259a58e9","local-member-attributes":"{Name:pause-066593 ClientURLs:[https://192.168.50.103:2379]}","request-path":"/0/members/1a05c061259a58e9/attributes","cluster-id":"98f45a2b3930cd1c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T12:00:22.210629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:00:22.214755Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T12:00:22.214819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T12:00:22.210742Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.210774Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:00:22.220176Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:00:22.225398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.103:2379"}
	{"level":"info","ts":"2025-04-14T12:00:22.238797Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:00:22.248713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98f45a2b3930cd1c","local-member-id":"1a05c061259a58e9","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.248823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.248894Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.250682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:00:43 up 7 min,  0 users,  load average: 0.52, 0.50, 0.28
	Linux pause-066593 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a] <==
	I0414 12:00:24.299157       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 12:00:24.299191       1 policy_source.go:240] refreshing policies
	I0414 12:00:24.301343       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 12:00:24.301395       1 aggregator.go:171] initial CRD sync complete...
	I0414 12:00:24.301411       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 12:00:24.301416       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 12:00:24.301421       1 cache.go:39] Caches are synced for autoregister controller
	I0414 12:00:24.304785       1 controller.go:615] quota admission added evaluator for: namespaces
	E0414 12:00:24.354155       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0414 12:00:24.558067       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 12:00:25.162028       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 12:00:25.170714       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 12:00:25.170829       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 12:00:25.913226       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 12:00:25.973968       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 12:00:26.101293       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 12:00:26.108741       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.103]
	I0414 12:00:26.109969       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 12:00:26.114699       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 12:00:26.237005       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 12:00:26.960276       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 12:00:26.992743       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 12:00:27.008451       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 12:00:31.031361       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 12:00:31.082408       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9] <==
	I0414 12:00:30.842191       1 shared_informer.go:320] Caches are synced for PVC protection
	I0414 12:00:30.842476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-066593" podCIDRs=["10.244.0.0/24"]
	I0414 12:00:30.842513       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:30.842601       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:30.854346       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:00:30.891274       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:00:30.891317       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 12:00:30.891366       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 12:00:31.039398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:31.535492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:31.980996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="887.25907ms"
	I0414 12:00:32.000434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.306186ms"
	I0414 12:00:32.002010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.545184ms"
	I0414 12:00:32.028330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="89.581µs"
	I0414 12:00:33.122263       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.879µs"
	I0414 12:00:33.163352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="60.888µs"
	I0414 12:00:34.025476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.2256ms"
	I0414 12:00:34.026843       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="105.975µs"
	I0414 12:00:36.411101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.27956ms"
	I0414 12:00:36.411319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="77.053µs"
	I0414 12:00:36.468872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="21.159694ms"
	I0414 12:00:36.469631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="140.235µs"
	I0414 12:00:37.148323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.505481ms"
	I0414 12:00:37.149492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="95.779µs"
	I0414 12:00:37.471671       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	
	
	==> kube-proxy [87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:00:32.373781       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:00:32.388772       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.103"]
	E0414 12:00:32.388855       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:00:32.469787       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:00:32.469821       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:00:32.469849       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:00:32.474786       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:00:32.477517       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:00:32.477569       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:00:32.480334       1 config.go:199] "Starting service config controller"
	I0414 12:00:32.480353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:00:32.480369       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:00:32.480373       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:00:32.480393       1 config.go:329] "Starting node config controller"
	I0414 12:00:32.480397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:00:32.580468       1 shared_informer.go:320] Caches are synced for node config
	I0414 12:00:32.580513       1 shared_informer.go:320] Caches are synced for service config
	I0414 12:00:32.580526       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f] <==
	W0414 12:00:25.290714       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0414 12:00:25.290754       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.368920       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 12:00:25.369040       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.386328       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 12:00:25.386371       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.434899       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.434953       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.525899       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 12:00:25.527019       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.531854       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.532032       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.547731       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 12:00:25.547774       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.558225       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.558318       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.617168       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 12:00:25.617395       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.626292       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 12:00:25.626437       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.669285       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.669478       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.745276       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 12:00:25.745376       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0414 12:00:28.396952       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 12:00:28 pause-066593 kubelet[9634]: E0414 12:00:28.092865    9634 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-066593\" already exists" pod="kube-system/kube-apiserver-pause-066593"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.165020    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-066593" podStartSLOduration=1.164980302 podStartE2EDuration="1.164980302s" podCreationTimestamp="2025-04-14 12:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.141802661 +0000 UTC m=+1.363496422" watchObservedRunningTime="2025-04-14 12:00:28.164980302 +0000 UTC m=+1.386674063"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.183404    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-066593" podStartSLOduration=1.183387193 podStartE2EDuration="1.183387193s" podCreationTimestamp="2025-04-14 12:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.166840282 +0000 UTC m=+1.388534043" watchObservedRunningTime="2025-04-14 12:00:28.183387193 +0000 UTC m=+1.405080957"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.195144    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-066593" podStartSLOduration=3.195121947 podStartE2EDuration="3.195121947s" podCreationTimestamp="2025-04-14 12:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.183907243 +0000 UTC m=+1.405601002" watchObservedRunningTime="2025-04-14 12:00:28.195121947 +0000 UTC m=+1.416815711"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.208967    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-066593" podStartSLOduration=1.208940726 podStartE2EDuration="1.208940726s" podCreationTimestamp="2025-04-14 12:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.195805171 +0000 UTC m=+1.417498933" watchObservedRunningTime="2025-04-14 12:00:28.208940726 +0000 UTC m=+1.430634488"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165248    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-xtables-lock\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165364    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxlt\" (UniqueName: \"kubernetes.io/projected/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-api-access-tmxlt\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165405    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-lib-modules\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165426    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-proxy\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: E0414 12:00:31.276058    9634 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 14 12:00:31 pause-066593 kubelet[9634]: E0414 12:00:31.276126    9634 projected.go:194] Error preparing data for projected volume kube-api-access-tmxlt for pod kube-system/kube-proxy-ggp22: configmap "kube-root-ca.crt" not found
	Apr 14 12:00:31 pause-066593 kubelet[9634]: E0414 12:00:31.276281    9634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-api-access-tmxlt podName:c745b3df-3bb6-4de0-acd4-9a541f0aa3e6 nodeName:}" failed. No retries permitted until 2025-04-14 12:00:31.776238479 +0000 UTC m=+4.997932221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tmxlt" (UniqueName: "kubernetes.io/projected/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-api-access-tmxlt") pod "kube-proxy-ggp22" (UID: "c745b3df-3bb6-4de0-acd4-9a541f0aa3e6") : configmap "kube-root-ca.crt" not found
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.871264    9634 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.970526    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xdg\" (UniqueName: \"kubernetes.io/projected/9f645792-327d-4c97-8f28-7c50783fa8af-kube-api-access-n6xdg\") pod \"coredns-668d6bf9bc-5crvb\" (UID: \"9f645792-327d-4c97-8f28-7c50783fa8af\") " pod="kube-system/coredns-668d6bf9bc-5crvb"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.970612    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f645792-327d-4c97-8f28-7c50783fa8af-config-volume\") pod \"coredns-668d6bf9bc-5crvb\" (UID: \"9f645792-327d-4c97-8f28-7c50783fa8af\") " pod="kube-system/coredns-668d6bf9bc-5crvb"
	Apr 14 12:00:32 pause-066593 kubelet[9634]: I0414 12:00:32.071795    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/269c281c-7c8b-4bf6-8127-04c476b3ef79-config-volume\") pod \"coredns-668d6bf9bc-2k558\" (UID: \"269c281c-7c8b-4bf6-8127-04c476b3ef79\") " pod="kube-system/coredns-668d6bf9bc-2k558"
	Apr 14 12:00:32 pause-066593 kubelet[9634]: I0414 12:00:32.071865    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8rw\" (UniqueName: \"kubernetes.io/projected/269c281c-7c8b-4bf6-8127-04c476b3ef79-kube-api-access-qk8rw\") pod \"coredns-668d6bf9bc-2k558\" (UID: \"269c281c-7c8b-4bf6-8127-04c476b3ef79\") " pod="kube-system/coredns-668d6bf9bc-2k558"
	Apr 14 12:00:33 pause-066593 kubelet[9634]: I0414 12:00:33.121339    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5crvb" podStartSLOduration=2.121319272 podStartE2EDuration="2.121319272s" podCreationTimestamp="2025-04-14 12:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:33.121095794 +0000 UTC m=+6.342789554" watchObservedRunningTime="2025-04-14 12:00:33.121319272 +0000 UTC m=+6.343013030"
	Apr 14 12:00:33 pause-066593 kubelet[9634]: I0414 12:00:33.144501    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggp22" podStartSLOduration=2.144480486 podStartE2EDuration="2.144480486s" podCreationTimestamp="2025-04-14 12:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:33.143915593 +0000 UTC m=+6.365609355" watchObservedRunningTime="2025-04-14 12:00:33.144480486 +0000 UTC m=+6.366174246"
	Apr 14 12:00:34 pause-066593 kubelet[9634]: I0414 12:00:34.008811    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2k558" podStartSLOduration=3.008785718 podStartE2EDuration="3.008785718s" podCreationTimestamp="2025-04-14 12:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:33.162594201 +0000 UTC m=+6.384287961" watchObservedRunningTime="2025-04-14 12:00:34.008785718 +0000 UTC m=+7.230479482"
	Apr 14 12:00:36 pause-066593 kubelet[9634]: I0414 12:00:36.385254    9634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: E0414 12:00:37.074268    9634 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632037073739179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: E0414 12:00:37.074328    9634 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632037073739179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: I0414 12:00:37.455668    9634 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: I0414 12:00:37.457674    9634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-066593 -n pause-066593
helpers_test.go:261: (dbg) Run:  kubectl --context pause-066593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
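(Note: to reproduce this post-mortem by hand, the same commands the test harness runs above can be issued directly, assuming the pause-066593 profile still exists on the host; a minimal sketch using only the invocations shown in this report:)

    # host / apiserver state for the profile
    out/minikube-linux-amd64 status --format={{.Host}} -p pause-066593 -n pause-066593
    out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-066593 -n pause-066593
    # any pods not in Running phase
    kubectl --context pause-066593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
    # last 25 lines of minikube logs for the profile
    out/minikube-linux-amd64 -p pause-066593 logs -n 25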
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-066593 -n pause-066593
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-066593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-066593 logs -n 25: (1.071921018s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo cat                    | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo cat                    | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo cat                    | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-948178 sudo                        | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-948178                             | custom-flannel-948178 | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC | 14 Apr 25 12:00 UTC |
	| start   | -p bridge-948178 --memory=3072                       | bridge-948178         | jenkins | v1.35.0 | 14 Apr 25 12:00 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:00:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:00:26.396329  559198 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:00:26.396661  559198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:00:26.396676  559198 out.go:358] Setting ErrFile to fd 2...
	I0414 12:00:26.396683  559198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:00:26.397035  559198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 12:00:26.397899  559198 out.go:352] Setting JSON to false
	I0414 12:00:26.399597  559198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20577,"bootTime":1744611449,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:00:26.399690  559198 start.go:139] virtualization: kvm guest
	I0414 12:00:26.401549  559198 out.go:177] * [bridge-948178] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:00:26.403264  559198 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 12:00:26.403264  559198 notify.go:220] Checking for updates...
	I0414 12:00:26.405846  559198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:00:26.407027  559198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:00:26.408161  559198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:26.409410  559198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:00:26.410606  559198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:00:26.412382  559198 config.go:182] Loaded profile config "enable-default-cni-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.412478  559198 config.go:182] Loaded profile config "flannel-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.412584  559198 config.go:182] Loaded profile config "pause-066593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.412675  559198 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:00:26.457276  559198 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 12:00:26.458412  559198 start.go:297] selected driver: kvm2
	I0414 12:00:26.458429  559198 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:00:26.458453  559198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:00:26.459228  559198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:00:26.459351  559198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:00:26.479988  559198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:00:26.480058  559198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:00:26.480335  559198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:00:26.480374  559198 cni.go:84] Creating CNI manager for "bridge"
	I0414 12:00:26.480381  559198 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:00:26.480483  559198 start.go:340] cluster config:
	{Name:bridge-948178 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-948178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:00:26.480623  559198 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:00:26.482578  559198 out.go:177] * Starting "bridge-948178" primary control-plane node in "bridge-948178" cluster
	I0414 12:00:26.483730  559198 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:00:26.483770  559198 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:00:26.483780  559198 cache.go:56] Caching tarball of preloaded images
	I0414 12:00:26.483898  559198 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:00:26.483912  559198 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:00:26.484008  559198 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/config.json ...
	I0414 12:00:26.484039  559198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/config.json: {Name:mk18a844ab90a532226f51306d62ef8609a40e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:26.484213  559198 start.go:360] acquireMachinesLock for bridge-948178: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:00:26.484253  559198 start.go:364] duration metric: took 21.895µs to acquireMachinesLock for "bridge-948178"
	I0414 12:00:26.484274  559198 start.go:93] Provisioning new machine with config: &{Name:bridge-948178 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-948178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:00:26.484334  559198 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 12:00:22.658284  557124 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 12:00:22.664988  557124 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 12:00:22.665009  557124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 12:00:22.687160  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 12:00:23.208279  557124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:00:23.208450  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:23.208565  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-948178 minikube.k8s.io/updated_at=2025_04_14T12_00_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4 minikube.k8s.io/name=flannel-948178 minikube.k8s.io/primary=true
	I0414 12:00:23.515930  557124 ops.go:34] apiserver oom_adj: -16
	I0414 12:00:23.515998  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:24.016289  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:24.516966  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:25.017024  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:25.516129  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:26.016669  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:26.516159  557124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:26.676827  557124 kubeadm.go:1113] duration metric: took 3.468429889s to wait for elevateKubeSystemPrivileges
	I0414 12:00:26.676885  557124 kubeadm.go:394] duration metric: took 15.16133565s to StartCluster
	I0414 12:00:26.676909  557124 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:26.677002  557124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:00:26.678381  557124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:26.678591  557124 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.207 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:00:26.678613  557124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 12:00:26.678834  557124 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 12:00:26.678949  557124 addons.go:69] Setting storage-provisioner=true in profile "flannel-948178"
	I0414 12:00:26.678972  557124 addons.go:238] Setting addon storage-provisioner=true in "flannel-948178"
	I0414 12:00:26.679000  557124 config.go:182] Loaded profile config "flannel-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:26.679015  557124 addons.go:69] Setting default-storageclass=true in profile "flannel-948178"
	I0414 12:00:26.679039  557124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-948178"
	I0414 12:00:26.679007  557124 host.go:66] Checking if "flannel-948178" exists ...
	I0414 12:00:26.679579  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.679579  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.679670  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.679699  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.681039  557124 out.go:177] * Verifying Kubernetes components...
	I0414 12:00:26.682454  557124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:00:26.699798  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0414 12:00:26.700386  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.701067  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0414 12:00:26.701153  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.701170  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.701633  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.701749  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.702036  557124 main.go:141] libmachine: (flannel-948178) Calling .GetState
	I0414 12:00:26.702201  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.702219  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.702665  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.703250  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.703328  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.706454  557124 addons.go:238] Setting addon default-storageclass=true in "flannel-948178"
	I0414 12:00:26.706502  557124 host.go:66] Checking if "flannel-948178" exists ...
	I0414 12:00:26.706881  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.706924  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.723868  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0414 12:00:26.724322  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.724928  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.724959  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.725389  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.725877  557124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.725912  557124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.738476  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0414 12:00:26.739181  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.740061  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.740094  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.741029  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.741284  557124 main.go:141] libmachine: (flannel-948178) Calling .GetState
	I0414 12:00:26.744185  557124 main.go:141] libmachine: (flannel-948178) Calling .DriverName
	I0414 12:00:26.746138  557124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:00:26.747768  557124 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:00:26.747803  557124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 12:00:26.747861  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHHostname
	I0414 12:00:26.752538  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.752593  557124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0414 12:00:26.753383  557124 main.go:141] libmachine: (flannel-948178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:1f:cd", ip: ""} in network mk-flannel-948178: {Iface:virbr1 ExpiryTime:2025-04-14 12:59:56 +0000 UTC Type:0 Mac:52:54:00:61:1f:cd Iaid: IPaddr:192.168.61.207 Prefix:24 Hostname:flannel-948178 Clientid:01:52:54:00:61:1f:cd}
	I0414 12:00:26.753411  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined IP address 192.168.61.207 and MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.753412  557124 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.753591  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHPort
	I0414 12:00:26.753820  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHKeyPath
	I0414 12:00:26.754035  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHUsername
	I0414 12:00:26.754301  557124 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.754321  557124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.754390  557124 sshutil.go:53] new ssh client: &{IP:192.168.61.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/flannel-948178/id_rsa Username:docker}
	I0414 12:00:26.754813  557124 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.755034  557124 main.go:141] libmachine: (flannel-948178) Calling .GetState
	I0414 12:00:26.757003  557124 main.go:141] libmachine: (flannel-948178) Calling .DriverName
	I0414 12:00:26.757353  557124 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 12:00:26.757377  557124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 12:00:26.757399  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHHostname
	I0414 12:00:26.760484  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.760735  557124 main.go:141] libmachine: (flannel-948178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:1f:cd", ip: ""} in network mk-flannel-948178: {Iface:virbr1 ExpiryTime:2025-04-14 12:59:56 +0000 UTC Type:0 Mac:52:54:00:61:1f:cd Iaid: IPaddr:192.168.61.207 Prefix:24 Hostname:flannel-948178 Clientid:01:52:54:00:61:1f:cd}
	I0414 12:00:26.760759  557124 main.go:141] libmachine: (flannel-948178) DBG | domain flannel-948178 has defined IP address 192.168.61.207 and MAC address 52:54:00:61:1f:cd in network mk-flannel-948178
	I0414 12:00:26.761006  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHPort
	I0414 12:00:26.761203  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHKeyPath
	I0414 12:00:26.761386  557124 main.go:141] libmachine: (flannel-948178) Calling .GetSSHUsername
	I0414 12:00:26.761524  557124 sshutil.go:53] new ssh client: &{IP:192.168.61.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/flannel-948178/id_rsa Username:docker}
	I0414 12:00:27.011962  557124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 12:00:27.012165  557124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:00:27.174745  557124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 12:00:27.174939  557124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:00:27.480731  557124 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0414 12:00:27.482040  557124 node_ready.go:35] waiting up to 15m0s for node "flannel-948178" to be "Ready" ...
	I0414 12:00:26.225953  549274 out.go:235]   - Configuring RBAC rules ...
	I0414 12:00:26.226129  549274 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 12:00:26.233878  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 12:00:26.248135  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 12:00:26.257686  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 12:00:26.261944  549274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 12:00:26.267711  549274 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 12:00:26.552245  549274 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 12:00:27.016006  549274 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 12:00:27.556738  549274 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 12:00:27.556765  549274 kubeadm.go:310] 
	I0414 12:00:27.556849  549274 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 12:00:27.556859  549274 kubeadm.go:310] 
	I0414 12:00:27.556944  549274 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 12:00:27.556958  549274 kubeadm.go:310] 
	I0414 12:00:27.557002  549274 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 12:00:27.557081  549274 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 12:00:27.557153  549274 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 12:00:27.557175  549274 kubeadm.go:310] 
	I0414 12:00:27.557257  549274 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 12:00:27.557266  549274 kubeadm.go:310] 
	I0414 12:00:27.557330  549274 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 12:00:27.557340  549274 kubeadm.go:310] 
	I0414 12:00:27.557439  549274 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 12:00:27.557550  549274 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 12:00:27.557666  549274 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 12:00:27.557691  549274 kubeadm.go:310] 
	I0414 12:00:27.557838  549274 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 12:00:27.557955  549274 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 12:00:27.557973  549274 kubeadm.go:310] 
	I0414 12:00:27.558091  549274 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w1dw4z.5ubj2jd8d03ofny1 \
	I0414 12:00:27.558246  549274 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:218652e93704fc369ec14e3a4540532c3ba9e337011061ef10cc8e1465907a51 \
	I0414 12:00:27.558286  549274 kubeadm.go:310] 	--control-plane 
	I0414 12:00:27.558295  549274 kubeadm.go:310] 
	I0414 12:00:27.558413  549274 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 12:00:27.558425  549274 kubeadm.go:310] 
	I0414 12:00:27.558544  549274 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w1dw4z.5ubj2jd8d03ofny1 \
	I0414 12:00:27.558703  549274 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:218652e93704fc369ec14e3a4540532c3ba9e337011061ef10cc8e1465907a51 
	I0414 12:00:27.559637  549274 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:00:27.559698  549274 cni.go:84] Creating CNI manager for ""
	I0414 12:00:27.559750  549274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:00:27.561457  549274 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 12:00:27.562842  549274 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 12:00:27.574626  549274 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 12:00:27.598966  549274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:00:27.599164  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:27.599276  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-066593 minikube.k8s.io/updated_at=2025_04_14T12_00_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4 minikube.k8s.io/name=pause-066593 minikube.k8s.io/primary=true
	I0414 12:00:27.625515  549274 ops.go:34] apiserver oom_adj: -16
	I0414 12:00:27.745234  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:27.848898  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.848931  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.849474  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.849573  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.849637  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.849667  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.849689  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.850075  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.850093  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.851027  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.851114  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.853947  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.854141  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.854213  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.854246  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.854268  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.855242  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.857077  557124 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0414 12:00:27.857099  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.875255  557124 main.go:141] libmachine: Making call to close driver server
	I0414 12:00:27.875305  557124 main.go:141] libmachine: (flannel-948178) Calling .Close
	I0414 12:00:27.875776  557124 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:00:27.875803  557124 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:00:27.875804  557124 main.go:141] libmachine: (flannel-948178) DBG | Closing plugin on server side
	I0414 12:00:27.878046  557124 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 12:00:26.485925  559198 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 12:00:26.486091  559198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:26.486150  559198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:26.504598  559198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0414 12:00:26.505115  559198 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:26.505793  559198 main.go:141] libmachine: Using API Version  1
	I0414 12:00:26.505836  559198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:26.506222  559198 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:26.506427  559198 main.go:141] libmachine: (bridge-948178) Calling .GetMachineName
	I0414 12:00:26.506555  559198 main.go:141] libmachine: (bridge-948178) Calling .DriverName
	I0414 12:00:26.506661  559198 start.go:159] libmachine.API.Create for "bridge-948178" (driver="kvm2")
	I0414 12:00:26.506696  559198 client.go:168] LocalClient.Create starting
	I0414 12:00:26.506732  559198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem
	I0414 12:00:26.506771  559198 main.go:141] libmachine: Decoding PEM data...
	I0414 12:00:26.506788  559198 main.go:141] libmachine: Parsing certificate...
	I0414 12:00:26.506889  559198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem
	I0414 12:00:26.506919  559198 main.go:141] libmachine: Decoding PEM data...
	I0414 12:00:26.506948  559198 main.go:141] libmachine: Parsing certificate...
	I0414 12:00:26.506972  559198 main.go:141] libmachine: Running pre-create checks...
	I0414 12:00:26.506996  559198 main.go:141] libmachine: (bridge-948178) Calling .PreCreateCheck
	I0414 12:00:26.507354  559198 main.go:141] libmachine: (bridge-948178) Calling .GetConfigRaw
	I0414 12:00:26.507770  559198 main.go:141] libmachine: Creating machine...
	I0414 12:00:26.507788  559198 main.go:141] libmachine: (bridge-948178) Calling .Create
	I0414 12:00:26.507962  559198 main.go:141] libmachine: (bridge-948178) creating KVM machine...
	I0414 12:00:26.507980  559198 main.go:141] libmachine: (bridge-948178) creating network...
	I0414 12:00:26.509569  559198 main.go:141] libmachine: (bridge-948178) DBG | found existing default KVM network
	I0414 12:00:26.510931  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:26.510763  559220 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013ca0}
	I0414 12:00:26.510952  559198 main.go:141] libmachine: (bridge-948178) DBG | created network xml: 
	I0414 12:00:26.510962  559198 main.go:141] libmachine: (bridge-948178) DBG | <network>
	I0414 12:00:26.510984  559198 main.go:141] libmachine: (bridge-948178) DBG |   <name>mk-bridge-948178</name>
	I0414 12:00:26.510995  559198 main.go:141] libmachine: (bridge-948178) DBG |   <dns enable='no'/>
	I0414 12:00:26.511001  559198 main.go:141] libmachine: (bridge-948178) DBG |   
	I0414 12:00:26.511015  559198 main.go:141] libmachine: (bridge-948178) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 12:00:26.511033  559198 main.go:141] libmachine: (bridge-948178) DBG |     <dhcp>
	I0414 12:00:26.511043  559198 main.go:141] libmachine: (bridge-948178) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 12:00:26.511055  559198 main.go:141] libmachine: (bridge-948178) DBG |     </dhcp>
	I0414 12:00:26.511066  559198 main.go:141] libmachine: (bridge-948178) DBG |   </ip>
	I0414 12:00:26.511074  559198 main.go:141] libmachine: (bridge-948178) DBG |   
	I0414 12:00:26.511081  559198 main.go:141] libmachine: (bridge-948178) DBG | </network>
	I0414 12:00:26.511090  559198 main.go:141] libmachine: (bridge-948178) DBG | 
	I0414 12:00:26.516766  559198 main.go:141] libmachine: (bridge-948178) DBG | trying to create private KVM network mk-bridge-948178 192.168.39.0/24...
	I0414 12:00:26.624939  559198 main.go:141] libmachine: (bridge-948178) DBG | private KVM network mk-bridge-948178 192.168.39.0/24 created
	I0414 12:00:26.624976  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:26.624856  559220 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:26.625180  559198 main.go:141] libmachine: (bridge-948178) setting up store path in /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178 ...
	I0414 12:00:26.625208  559198 main.go:141] libmachine: (bridge-948178) building disk image from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 12:00:26.625227  559198 main.go:141] libmachine: (bridge-948178) Downloading /home/jenkins/minikube-integration/20534-503273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 12:00:26.986299  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:26.986134  559220 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/id_rsa...
	I0414 12:00:27.014092  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:27.013925  559220 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/bridge-948178.rawdisk...
	I0414 12:00:27.014120  559198 main.go:141] libmachine: (bridge-948178) DBG | Writing magic tar header
	I0414 12:00:27.014135  559198 main.go:141] libmachine: (bridge-948178) DBG | Writing SSH key tar header
	I0414 12:00:27.014146  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:27.014113  559220 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178 ...
	I0414 12:00:27.014296  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178 (perms=drwx------)
	I0414 12:00:27.014315  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines (perms=drwxr-xr-x)
	I0414 12:00:27.014329  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube (perms=drwxr-xr-x)
	I0414 12:00:27.014343  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration/20534-503273 (perms=drwxrwxr-x)
	I0414 12:00:27.014360  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 12:00:27.014373  559198 main.go:141] libmachine: (bridge-948178) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 12:00:27.014384  559198 main.go:141] libmachine: (bridge-948178) creating domain...
	I0414 12:00:27.014400  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178
	I0414 12:00:27.014408  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines
	I0414 12:00:27.014424  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:27.014437  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273
	I0414 12:00:27.014449  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 12:00:27.014456  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home/jenkins
	I0414 12:00:27.014469  559198 main.go:141] libmachine: (bridge-948178) DBG | checking permissions on dir: /home
	I0414 12:00:27.014478  559198 main.go:141] libmachine: (bridge-948178) DBG | skipping /home - not owner
	I0414 12:00:27.016124  559198 main.go:141] libmachine: (bridge-948178) define libvirt domain using xml: 
	I0414 12:00:27.016141  559198 main.go:141] libmachine: (bridge-948178) <domain type='kvm'>
	I0414 12:00:27.016150  559198 main.go:141] libmachine: (bridge-948178)   <name>bridge-948178</name>
	I0414 12:00:27.016160  559198 main.go:141] libmachine: (bridge-948178)   <memory unit='MiB'>3072</memory>
	I0414 12:00:27.016168  559198 main.go:141] libmachine: (bridge-948178)   <vcpu>2</vcpu>
	I0414 12:00:27.016174  559198 main.go:141] libmachine: (bridge-948178)   <features>
	I0414 12:00:27.016187  559198 main.go:141] libmachine: (bridge-948178)     <acpi/>
	I0414 12:00:27.016193  559198 main.go:141] libmachine: (bridge-948178)     <apic/>
	I0414 12:00:27.016214  559198 main.go:141] libmachine: (bridge-948178)     <pae/>
	I0414 12:00:27.016220  559198 main.go:141] libmachine: (bridge-948178)     
	I0414 12:00:27.016228  559198 main.go:141] libmachine: (bridge-948178)   </features>
	I0414 12:00:27.016235  559198 main.go:141] libmachine: (bridge-948178)   <cpu mode='host-passthrough'>
	I0414 12:00:27.016240  559198 main.go:141] libmachine: (bridge-948178)   
	I0414 12:00:27.016246  559198 main.go:141] libmachine: (bridge-948178)   </cpu>
	I0414 12:00:27.016253  559198 main.go:141] libmachine: (bridge-948178)   <os>
	I0414 12:00:27.016259  559198 main.go:141] libmachine: (bridge-948178)     <type>hvm</type>
	I0414 12:00:27.016268  559198 main.go:141] libmachine: (bridge-948178)     <boot dev='cdrom'/>
	I0414 12:00:27.016274  559198 main.go:141] libmachine: (bridge-948178)     <boot dev='hd'/>
	I0414 12:00:27.016282  559198 main.go:141] libmachine: (bridge-948178)     <bootmenu enable='no'/>
	I0414 12:00:27.016287  559198 main.go:141] libmachine: (bridge-948178)   </os>
	I0414 12:00:27.016294  559198 main.go:141] libmachine: (bridge-948178)   <devices>
	I0414 12:00:27.016301  559198 main.go:141] libmachine: (bridge-948178)     <disk type='file' device='cdrom'>
	I0414 12:00:27.016313  559198 main.go:141] libmachine: (bridge-948178)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/boot2docker.iso'/>
	I0414 12:00:27.016320  559198 main.go:141] libmachine: (bridge-948178)       <target dev='hdc' bus='scsi'/>
	I0414 12:00:27.016344  559198 main.go:141] libmachine: (bridge-948178)       <readonly/>
	I0414 12:00:27.016351  559198 main.go:141] libmachine: (bridge-948178)     </disk>
	I0414 12:00:27.016359  559198 main.go:141] libmachine: (bridge-948178)     <disk type='file' device='disk'>
	I0414 12:00:27.016367  559198 main.go:141] libmachine: (bridge-948178)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 12:00:27.016379  559198 main.go:141] libmachine: (bridge-948178)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/bridge-948178/bridge-948178.rawdisk'/>
	I0414 12:00:27.016385  559198 main.go:141] libmachine: (bridge-948178)       <target dev='hda' bus='virtio'/>
	I0414 12:00:27.016391  559198 main.go:141] libmachine: (bridge-948178)     </disk>
	I0414 12:00:27.016397  559198 main.go:141] libmachine: (bridge-948178)     <interface type='network'>
	I0414 12:00:27.016405  559198 main.go:141] libmachine: (bridge-948178)       <source network='mk-bridge-948178'/>
	I0414 12:00:27.016412  559198 main.go:141] libmachine: (bridge-948178)       <model type='virtio'/>
	I0414 12:00:27.016419  559198 main.go:141] libmachine: (bridge-948178)     </interface>
	I0414 12:00:27.016427  559198 main.go:141] libmachine: (bridge-948178)     <interface type='network'>
	I0414 12:00:27.016436  559198 main.go:141] libmachine: (bridge-948178)       <source network='default'/>
	I0414 12:00:27.016443  559198 main.go:141] libmachine: (bridge-948178)       <model type='virtio'/>
	I0414 12:00:27.016451  559198 main.go:141] libmachine: (bridge-948178)     </interface>
	I0414 12:00:27.016458  559198 main.go:141] libmachine: (bridge-948178)     <serial type='pty'>
	I0414 12:00:27.016466  559198 main.go:141] libmachine: (bridge-948178)       <target port='0'/>
	I0414 12:00:27.016472  559198 main.go:141] libmachine: (bridge-948178)     </serial>
	I0414 12:00:27.016481  559198 main.go:141] libmachine: (bridge-948178)     <console type='pty'>
	I0414 12:00:27.016489  559198 main.go:141] libmachine: (bridge-948178)       <target type='serial' port='0'/>
	I0414 12:00:27.016496  559198 main.go:141] libmachine: (bridge-948178)     </console>
	I0414 12:00:27.016503  559198 main.go:141] libmachine: (bridge-948178)     <rng model='virtio'>
	I0414 12:00:27.016513  559198 main.go:141] libmachine: (bridge-948178)       <backend model='random'>/dev/random</backend>
	I0414 12:00:27.016521  559198 main.go:141] libmachine: (bridge-948178)     </rng>
	I0414 12:00:27.016528  559198 main.go:141] libmachine: (bridge-948178)     
	I0414 12:00:27.016533  559198 main.go:141] libmachine: (bridge-948178)     
	I0414 12:00:27.016540  559198 main.go:141] libmachine: (bridge-948178)   </devices>
	I0414 12:00:27.016546  559198 main.go:141] libmachine: (bridge-948178) </domain>
	I0414 12:00:27.016556  559198 main.go:141] libmachine: (bridge-948178) 
	I0414 12:00:27.021675  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:27:ee:49 in network default
	I0414 12:00:27.022461  559198 main.go:141] libmachine: (bridge-948178) starting domain...
	I0414 12:00:27.022482  559198 main.go:141] libmachine: (bridge-948178) ensuring networks are active...
	I0414 12:00:27.022500  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:27.023352  559198 main.go:141] libmachine: (bridge-948178) Ensuring network default is active
	I0414 12:00:27.023800  559198 main.go:141] libmachine: (bridge-948178) Ensuring network mk-bridge-948178 is active
	I0414 12:00:27.024528  559198 main.go:141] libmachine: (bridge-948178) getting domain XML...
	I0414 12:00:27.025504  559198 main.go:141] libmachine: (bridge-948178) creating domain...
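
The "define libvirt domain using xml" and "starting domain..." lines above correspond to defining a persistent domain from the generated XML and then starting it. A minimal sketch of that step with the libvirt Go bindings follows; the package and helper names (kvmsketch, defineAndStartDomain) are invented for the illustration and are not minikube's code.

// Hedged sketch, assuming the libvirt Go bindings (libvirt.org/go/libvirt).
package kvmsketch

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStartDomain mirrors the "define libvirt domain using xml" and
// "starting domain..." steps visible in the log above.
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
	// Define the persistent domain from the generated XML.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return nil, fmt.Errorf("define domain: %w", err)
	}
	// Start (boot) the previously defined domain.
	if err := dom.Create(); err != nil {
		dom.Free()
		return nil, fmt.Errorf("start domain: %w", err)
	}
	return dom, nil
}
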
	I0414 12:00:28.583079  559198 main.go:141] libmachine: (bridge-948178) waiting for IP...
	I0414 12:00:28.584006  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:28.584577  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:28.584607  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:28.584569  559220 retry.go:31] will retry after 229.183608ms: waiting for domain to come up
	I0414 12:00:28.815057  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:28.815605  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:28.815639  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:28.815550  559220 retry.go:31] will retry after 334.13925ms: waiting for domain to come up
	I0414 12:00:29.152077  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:29.152659  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:29.152693  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:29.152614  559220 retry.go:31] will retry after 298.638311ms: waiting for domain to come up
	I0414 12:00:29.453156  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:29.453729  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:29.453752  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:29.453698  559220 retry.go:31] will retry after 603.190901ms: waiting for domain to come up
	I0414 12:00:30.058621  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:30.059252  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:30.059304  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:30.059194  559220 retry.go:31] will retry after 658.644344ms: waiting for domain to come up
	I0414 12:00:30.719846  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:30.720474  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:30.720509  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:30.720433  559220 retry.go:31] will retry after 942.95162ms: waiting for domain to come up
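
The repeated "will retry after ...: waiting for domain to come up" lines above come from a retry loop that polls libvirt for the domain's DHCP lease with growing, jittered delays. The sketch below is a generic illustration of that pattern, not minikube's retry.go; the function name retryUntil and the delay constants are assumptions.

// Generic retry-with-backoff sketch (illustrative only).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or timeout elapses, sleeping a
// growing, jittered delay between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Jittered, roughly increasing backoff, similar in spirit to the
		// delays visible in the log (229ms, 334ms, 298ms, 603ms, ...).
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	err := retryUntil(5*time.Second, func() error {
		return errors.New("unable to find current IP address of domain")
	})
	fmt.Println(err)
}
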
	I0414 12:00:28.245379  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:28.745969  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:29.246330  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:29.745312  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:30.245854  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:30.745749  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:31.246305  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:31.745780  549274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:00:31.839742  549274 kubeadm.go:1113] duration metric: took 4.240626698s to wait for elevateKubeSystemPrivileges
	I0414 12:00:31.839796  549274 kubeadm.go:394] duration metric: took 4m19.114269037s to StartCluster
	I0414 12:00:31.839824  549274 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:31.839915  549274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:00:31.841162  549274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:31.841413  549274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.103 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:00:31.841552  549274 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 12:00:31.841754  549274 config.go:182] Loaded profile config "pause-066593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:31.843235  549274 out.go:177] * Verifying Kubernetes components...
	I0414 12:00:31.843335  549274 out.go:177] * Enabled addons: 
	I0414 12:00:27.879195  557124 addons.go:514] duration metric: took 1.20038234s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 12:00:27.994977  557124 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-948178" context rescaled to 1 replicas
	I0414 12:00:29.485890  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:31.986771  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:31.844459  549274 addons.go:514] duration metric: took 2.921972ms for enable addons: enabled=[]
	I0414 12:00:31.844503  549274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:00:32.081288  549274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:00:32.110510  549274 node_ready.go:35] waiting up to 6m0s for node "pause-066593" to be "Ready" ...
	I0414 12:00:32.121856  549274 node_ready.go:49] node "pause-066593" has status "Ready":"True"
	I0414 12:00:32.121884  549274 node_ready.go:38] duration metric: took 11.329073ms for node "pause-066593" to be "Ready" ...
	I0414 12:00:32.121896  549274 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:00:32.133616  549274 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace to be "Ready" ...
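
The pod_ready.go lines above poll individual kube-system pods until their Ready condition becomes True. A minimal client-go sketch of such a wait is shown below; the helper name waitPodReady and the 2-second poll interval are assumptions for the illustration, not minikube's implementation.

// Hedged sketch using client-go to wait for a pod's Ready condition.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod until its PodReady condition is True or the
// timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
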
	I0414 12:00:31.665745  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:31.666266  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:31.666314  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:31.666255  559220 retry.go:31] will retry after 928.824569ms: waiting for domain to come up
	I0414 12:00:32.596434  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:32.597113  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:32.597149  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:32.597044  559220 retry.go:31] will retry after 1.012619466s: waiting for domain to come up
	I0414 12:00:33.611586  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:33.612237  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:33.612270  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:33.612188  559220 retry.go:31] will retry after 1.299147937s: waiting for domain to come up
	I0414 12:00:34.913627  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:34.914443  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:34.914478  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:34.914397  559220 retry.go:31] will retry after 2.036180868s: waiting for domain to come up
	I0414 12:00:34.485813  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:36.486300  557124 node_ready.go:53] node "flannel-948178" has status "Ready":"False"
	I0414 12:00:36.986247  557124 node_ready.go:49] node "flannel-948178" has status "Ready":"True"
	I0414 12:00:36.986281  557124 node_ready.go:38] duration metric: took 9.50419715s for node "flannel-948178" to be "Ready" ...
	I0414 12:00:36.986295  557124 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:00:36.990506  557124 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wqh8l" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:34.139145  549274 pod_ready.go:103] pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace has status "Ready":"False"
	I0414 12:00:36.140739  549274 pod_ready.go:103] pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace has status "Ready":"False"
	I0414 12:00:37.639976  549274 pod_ready.go:93] pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:37.640014  549274 pod_ready.go:82] duration metric: took 5.506361315s for pod "coredns-668d6bf9bc-2k558" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:37.640029  549274 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5crvb" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:37.645072  549274 pod_ready.go:93] pod "coredns-668d6bf9bc-5crvb" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:37.645102  549274 pod_ready.go:82] duration metric: took 5.064335ms for pod "coredns-668d6bf9bc-5crvb" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:37.645117  549274 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:36.952762  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:36.953353  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:36.953383  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:36.953316  559220 retry.go:31] will retry after 2.161331717s: waiting for domain to come up
	I0414 12:00:39.116604  559198 main.go:141] libmachine: (bridge-948178) DBG | domain bridge-948178 has defined MAC address 52:54:00:7b:f7:b1 in network mk-bridge-948178
	I0414 12:00:39.117144  559198 main.go:141] libmachine: (bridge-948178) DBG | unable to find current IP address of domain bridge-948178 in network mk-bridge-948178
	I0414 12:00:39.117171  559198 main.go:141] libmachine: (bridge-948178) DBG | I0414 12:00:39.117111  559220 retry.go:31] will retry after 2.644029765s: waiting for domain to come up
	I0414 12:00:39.651260  549274 pod_ready.go:103] pod "etcd-pause-066593" in "kube-system" namespace has status "Ready":"False"
	I0414 12:00:41.153606  549274 pod_ready.go:93] pod "etcd-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.153631  549274 pod_ready.go:82] duration metric: took 3.508505825s for pod "etcd-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.153641  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.157750  549274 pod_ready.go:93] pod "kube-apiserver-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.157772  549274 pod_ready.go:82] duration metric: took 4.124912ms for pod "kube-apiserver-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.157786  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.161721  549274 pod_ready.go:93] pod "kube-controller-manager-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.161742  549274 pod_ready.go:82] duration metric: took 3.948902ms for pod "kube-controller-manager-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.161753  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ggp22" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.165685  549274 pod_ready.go:93] pod "kube-proxy-ggp22" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.165712  549274 pod_ready.go:82] duration metric: took 3.952452ms for pod "kube-proxy-ggp22" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.165725  549274 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.237393  549274 pod_ready.go:93] pod "kube-scheduler-pause-066593" in "kube-system" namespace has status "Ready":"True"
	I0414 12:00:41.237427  549274 pod_ready.go:82] duration metric: took 71.693021ms for pod "kube-scheduler-pause-066593" in "kube-system" namespace to be "Ready" ...
	I0414 12:00:41.237439  549274 pod_ready.go:39] duration metric: took 9.115527027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:00:41.237460  549274 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:00:41.237531  549274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:00:41.254696  549274 api_server.go:72] duration metric: took 9.413246788s to wait for apiserver process to appear ...
	I0414 12:00:41.254728  549274 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:00:41.254752  549274 api_server.go:253] Checking apiserver healthz at https://192.168.50.103:8443/healthz ...
	I0414 12:00:41.259884  549274 api_server.go:279] https://192.168.50.103:8443/healthz returned 200:
	ok
	I0414 12:00:41.260826  549274 api_server.go:141] control plane version: v1.32.2
	I0414 12:00:41.260851  549274 api_server.go:131] duration metric: took 6.115424ms to wait for apiserver health ...
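
The api_server.go lines above probe https://192.168.50.103:8443/healthz and expect a 200 response with body "ok". The sketch below illustrates such a probe with the Go standard library; it is not minikube's code, and the InsecureSkipVerify setting is a simplification for the sketch (a real client would trust the cluster CA from the kubeconfig).

// Illustrative apiserver healthz probe (standard library only).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz fetches the healthz endpoint and reports a non-200 status as an error.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch; use the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned 200:\n%s\n", url, body) // body is typically "ok"
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.50.103:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
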
	I0414 12:00:41.260861  549274 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:00:41.437661  549274 system_pods.go:59] 7 kube-system pods found
	I0414 12:00:41.437700  549274 system_pods.go:61] "coredns-668d6bf9bc-2k558" [269c281c-7c8b-4bf6-8127-04c476b3ef79] Running
	I0414 12:00:41.437708  549274 system_pods.go:61] "coredns-668d6bf9bc-5crvb" [9f645792-327d-4c97-8f28-7c50783fa8af] Running
	I0414 12:00:41.437713  549274 system_pods.go:61] "etcd-pause-066593" [93b2c796-790c-4e08-96e6-de23e05b580a] Running
	I0414 12:00:41.437718  549274 system_pods.go:61] "kube-apiserver-pause-066593" [f67ce6ce-2ac7-4d2d-a523-715332d41cd6] Running
	I0414 12:00:41.437723  549274 system_pods.go:61] "kube-controller-manager-pause-066593" [d730d1eb-17ce-424f-aac9-ef23cc5d5088] Running
	I0414 12:00:41.437729  549274 system_pods.go:61] "kube-proxy-ggp22" [c745b3df-3bb6-4de0-acd4-9a541f0aa3e6] Running
	I0414 12:00:41.437734  549274 system_pods.go:61] "kube-scheduler-pause-066593" [8bed515e-c64b-44e6-b527-bc3115a0010e] Running
	I0414 12:00:41.437742  549274 system_pods.go:74] duration metric: took 176.874043ms to wait for pod list to return data ...
	I0414 12:00:41.437753  549274 default_sa.go:34] waiting for default service account to be created ...
	I0414 12:00:41.637230  549274 default_sa.go:45] found service account: "default"
	I0414 12:00:41.637269  549274 default_sa.go:55] duration metric: took 199.505585ms for default service account to be created ...
	I0414 12:00:41.637283  549274 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 12:00:41.838027  549274 system_pods.go:86] 7 kube-system pods found
	I0414 12:00:41.838064  549274 system_pods.go:89] "coredns-668d6bf9bc-2k558" [269c281c-7c8b-4bf6-8127-04c476b3ef79] Running
	I0414 12:00:41.838072  549274 system_pods.go:89] "coredns-668d6bf9bc-5crvb" [9f645792-327d-4c97-8f28-7c50783fa8af] Running
	I0414 12:00:41.838078  549274 system_pods.go:89] "etcd-pause-066593" [93b2c796-790c-4e08-96e6-de23e05b580a] Running
	I0414 12:00:41.838084  549274 system_pods.go:89] "kube-apiserver-pause-066593" [f67ce6ce-2ac7-4d2d-a523-715332d41cd6] Running
	I0414 12:00:41.838089  549274 system_pods.go:89] "kube-controller-manager-pause-066593" [d730d1eb-17ce-424f-aac9-ef23cc5d5088] Running
	I0414 12:00:41.838095  549274 system_pods.go:89] "kube-proxy-ggp22" [c745b3df-3bb6-4de0-acd4-9a541f0aa3e6] Running
	I0414 12:00:41.838102  549274 system_pods.go:89] "kube-scheduler-pause-066593" [8bed515e-c64b-44e6-b527-bc3115a0010e] Running
	I0414 12:00:41.838110  549274 system_pods.go:126] duration metric: took 200.82002ms to wait for k8s-apps to be running ...
	I0414 12:00:41.838119  549274 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 12:00:41.838176  549274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:00:41.853173  549274 system_svc.go:56] duration metric: took 15.041515ms WaitForService to wait for kubelet
	I0414 12:00:41.853216  549274 kubeadm.go:582] duration metric: took 10.011768401s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:00:41.853252  549274 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:00:42.038262  549274 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:00:42.038301  549274 node_conditions.go:123] node cpu capacity is 2
	I0414 12:00:42.038328  549274 node_conditions.go:105] duration metric: took 185.06903ms to run NodePressure ...
	I0414 12:00:42.038345  549274 start.go:241] waiting for startup goroutines ...
	I0414 12:00:42.038356  549274 start.go:246] waiting for cluster config update ...
	I0414 12:00:42.038366  549274 start.go:255] writing updated cluster config ...
	I0414 12:00:42.038678  549274 ssh_runner.go:195] Run: rm -f paused
	I0414 12:00:42.108260  549274 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 12:00:42.110530  549274 out.go:177] * Done! kubectl is now configured to use "pause-066593" cluster and "default" namespace by default
	I0414 12:00:38.996709  557124 pod_ready.go:103] pod "coredns-668d6bf9bc-wqh8l" in "kube-system" namespace has status "Ready":"False"
	I0414 12:00:41.056942  557124 pod_ready.go:103] pod "coredns-668d6bf9bc-wqh8l" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.389843324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632044389818895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9038ca4-f757-4a17-aade-3ce9962f9f5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.390350804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91e47296-6791-4891-a9b6-aa0ceaaee731 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.390408602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91e47296-6791-4891-a9b6-aa0ceaaee731 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.390620429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91e47296-6791-4891-a9b6-aa0ceaaee731 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.429811615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44920869-0412-4eb2-a525-7914a417a8fe name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.429900993Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44920869-0412-4eb2-a525-7914a417a8fe name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.431090804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=765ff099-97bb-4487-a04c-fb4f8aab8731 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.431458201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632044431437292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=765ff099-97bb-4487-a04c-fb4f8aab8731 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.432032380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8228d12e-3989-43b7-aaee-d77b3ab506a8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.432101847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8228d12e-3989-43b7-aaee-d77b3ab506a8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.432264579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8228d12e-3989-43b7-aaee-d77b3ab506a8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.473732044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5667346f-dcf6-495f-baa2-b50b3fd66700 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.473880779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5667346f-dcf6-495f-baa2-b50b3fd66700 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.474997839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86989860-6137-4d15-a890-429dd75c99a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.476867589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632044476828623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86989860-6137-4d15-a890-429dd75c99a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.478729306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6536e5e4-61e8-4945-9ae1-9ad3498becc6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.478786840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6536e5e4-61e8-4945-9ae1-9ad3498becc6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.478976818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6536e5e4-61e8-4945-9ae1-9ad3498becc6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.517923903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27010c23-bfb9-4061-b048-10de8ed6b8af name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.518017074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27010c23-bfb9-4061-b048-10de8ed6b8af name=/runtime.v1.RuntimeService/Version
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.519257939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0881083-cb6d-4052-9c2e-c41c0e1c136b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.519930428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632044519902775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0881083-cb6d-4052-9c2e-c41c0e1c136b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.520476211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17d445e8-fb10-49d8-b5cc-d5a298816216 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.520526900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17d445e8-fb10-49d8-b5cc-d5a298816216 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:00:44 pause-066593 crio[2743]: time="2025-04-14 12:00:44.520831015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd,PodSandboxId:6b687945289982ad8b35e48d090a57899ac912c0e1b5256e7e38edef2349f014,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032687395166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2k558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269c281c-7c8b-4bf6-8127-04c476b3ef79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e,PodSandboxId:293d20394c0f34c7b05b245da1f455b73f5a6441b5c687bb3de104e1361d49ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744632032630331529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5crvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 9f645792-327d-4c97-8f28-7c50783fa8af,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4,PodSandboxId:82e503ade334c4337e35f109ef48d9f555e3409af3c711a940fdc9dca616c46f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,Cr
eatedAt:1744632032145272853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ggp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c745b3df-3bb6-4de0-acd4-9a541f0aa3e6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c,PodSandboxId:7c3220e560c80a94e064b6aca27631d825b2d4917b9409eead40f2e1075dbbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744632021465380161,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea36af9086731d550a4d8fc22acc4b4d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a,PodSandboxId:d00be87d9a5b58316806f82194687fe632983e5e40f535cc35d39af6fff6d3ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744632021482760171,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a7a8e77e276bff9e119e41de5bb363,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9,PodSandboxId:6b2d674ef151b81b5ad587e0f8cb836fe07774e4c6ce17b9c8dc858b58b52d5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744632021424401505,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da68646d6ca3a700bf803283959ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f,PodSandboxId:eaff9f6e10fc393e3fb35703d3f12ffea3bb66aa911e04e956e037849e7f1654,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744632021381669638,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-066593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2817158dd27d7937cf6b27fa907c7baf,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17d445e8-fb10-49d8-b5cc-d5a298816216 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16b5bdd3e5450       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   0                   6b68794528998       coredns-668d6bf9bc-2k558
	149b47ce1fb03       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   0                   293d20394c0f3       coredns-668d6bf9bc-5crvb
	87aa7f319bf45       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 seconds ago      Running             kube-proxy                0                   82e503ade334c       kube-proxy-ggp22
	5c944475ec9b3       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   23 seconds ago      Running             kube-apiserver            1                   d00be87d9a5b5       kube-apiserver-pause-066593
	6f8d02c906226       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   23 seconds ago      Running             etcd                      3                   7c3220e560c80       etcd-pause-066593
	b1b941099bf94       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   23 seconds ago      Running             kube-controller-manager   7                   6b2d674ef151b       kube-controller-manager-pause-066593
	0b9a2f65cf2c7       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   23 seconds ago      Running             kube-scheduler            3                   eaff9f6e10fc3       kube-scheduler-pause-066593
	
	
	==> coredns [149b47ce1fb032a67da8dfb58a045d0924d437d74132d404c3a71fd6b196866e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [16b5bdd3e5450d4bd163c8cca80c74ff815faf7e8cad6bd71f4db69cdae803fd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               pause-066593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-066593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
	                    minikube.k8s.io/name=pause-066593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T12_00_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 12:00:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-066593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 12:00:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 12:00:37 +0000   Mon, 14 Apr 2025 12:00:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.103
	  Hostname:    pause-066593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bfc41bdef014994b1f7eb6ea162d142
	  System UUID:                9bfc41bd-ef01-4994-b1f7-eb6ea162d142
	  Boot ID:                    91d7da9b-b538-4632-a109-87b8e73d2f92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2k558                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13s
	  kube-system                 coredns-668d6bf9bc-5crvb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13s
	  kube-system                 etcd-pause-066593                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         19s
	  kube-system                 kube-apiserver-pause-066593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-pause-066593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-ggp22                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-scheduler-pause-066593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12s   kube-proxy       
	  Normal  Starting                 18s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17s   kubelet          Node pause-066593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s   kubelet          Node pause-066593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s   kubelet          Node pause-066593 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s   node-controller  Node pause-066593 event: Registered Node pause-066593 in Controller
	
	
	==> dmesg <==
	[  +0.144704] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.290594] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.099919] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[  +4.921018] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.063612] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.984505] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.081614] kauditd_printk_skb: 69 callbacks suppressed
	[Apr14 11:54] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.580437] kauditd_printk_skb: 46 callbacks suppressed
	[ +34.996443] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.534441] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.308029] systemd-fstab-generator[2494]: Ignoring "noauto" option for root device
	[  +0.339805] systemd-fstab-generator[2558]: Ignoring "noauto" option for root device
	[  +0.252565] systemd-fstab-generator[2589]: Ignoring "noauto" option for root device
	[  +0.447690] systemd-fstab-generator[2619]: Ignoring "noauto" option for root device
	[Apr14 11:56] systemd-fstab-generator[2860]: Ignoring "noauto" option for root device
	[  +0.108136] kauditd_printk_skb: 169 callbacks suppressed
	[  +2.231479] systemd-fstab-generator[3209]: Ignoring "noauto" option for root device
	[ +13.607655] kauditd_printk_skb: 92 callbacks suppressed
	[Apr14 12:00] systemd-fstab-generator[9287]: Ignoring "noauto" option for root device
	[  +6.578300] systemd-fstab-generator[9627]: Ignoring "noauto" option for root device
	[  +0.128794] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.201392] systemd-fstab-generator[9741]: Ignoring "noauto" option for root device
	[  +0.123843] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.491782] kauditd_printk_skb: 66 callbacks suppressed
	
	
	==> etcd [6f8d02c906226bf2cbc6aa64eb4ab66788c53a34eb10a372c0ebd5ddc3124b6c] <==
	{"level":"info","ts":"2025-04-14T12:00:21.964425Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T12:00:21.964831Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"1a05c061259a58e9","initial-advertise-peer-urls":["https://192.168.50.103:2380"],"listen-peer-urls":["https://192.168.50.103:2380"],"advertise-client-urls":["https://192.168.50.103:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.103:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T12:00:21.964892Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T12:00:21.964991Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.103:2380"}
	{"level":"info","ts":"2025-04-14T12:00:21.965013Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.103:2380"}
	{"level":"info","ts":"2025-04-14T12:00:22.206639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-14T12:00:22.206776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-14T12:00:22.206825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 received MsgPreVoteResp from 1a05c061259a58e9 at term 1"}
	{"level":"info","ts":"2025-04-14T12:00:22.206859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 became candidate at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.206907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 received MsgVoteResp from 1a05c061259a58e9 at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.206980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a05c061259a58e9 became leader at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.207014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a05c061259a58e9 elected leader 1a05c061259a58e9 at term 2"}
	{"level":"info","ts":"2025-04-14T12:00:22.209975Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"1a05c061259a58e9","local-member-attributes":"{Name:pause-066593 ClientURLs:[https://192.168.50.103:2379]}","request-path":"/0/members/1a05c061259a58e9/attributes","cluster-id":"98f45a2b3930cd1c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T12:00:22.210629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:00:22.214755Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T12:00:22.214819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T12:00:22.210742Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.210774Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:00:22.220176Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:00:22.225398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.103:2379"}
	{"level":"info","ts":"2025-04-14T12:00:22.238797Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:00:22.248713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98f45a2b3930cd1c","local-member-id":"1a05c061259a58e9","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.248823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.248894Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T12:00:22.250682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:00:44 up 7 min,  0 users,  load average: 0.56, 0.51, 0.28
	Linux pause-066593 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5c944475ec9b38ee7f1c3b8fbb096113195df08a9435a8ccd72d7e006812d37a] <==
	I0414 12:00:24.299157       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 12:00:24.299191       1 policy_source.go:240] refreshing policies
	I0414 12:00:24.301343       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 12:00:24.301395       1 aggregator.go:171] initial CRD sync complete...
	I0414 12:00:24.301411       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 12:00:24.301416       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 12:00:24.301421       1 cache.go:39] Caches are synced for autoregister controller
	I0414 12:00:24.304785       1 controller.go:615] quota admission added evaluator for: namespaces
	E0414 12:00:24.354155       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0414 12:00:24.558067       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 12:00:25.162028       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 12:00:25.170714       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 12:00:25.170829       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 12:00:25.913226       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 12:00:25.973968       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 12:00:26.101293       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 12:00:26.108741       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.103]
	I0414 12:00:26.109969       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 12:00:26.114699       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 12:00:26.237005       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 12:00:26.960276       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 12:00:26.992743       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 12:00:27.008451       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 12:00:31.031361       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 12:00:31.082408       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b1b941099bf94766d81971e0e6ffb83a64485da2af5297a863e5ed31052a01a9] <==
	I0414 12:00:30.842191       1 shared_informer.go:320] Caches are synced for PVC protection
	I0414 12:00:30.842476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-066593" podCIDRs=["10.244.0.0/24"]
	I0414 12:00:30.842513       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:30.842601       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:30.854346       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:00:30.891274       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:00:30.891317       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 12:00:30.891366       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 12:00:31.039398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:31.535492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	I0414 12:00:31.980996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="887.25907ms"
	I0414 12:00:32.000434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.306186ms"
	I0414 12:00:32.002010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.545184ms"
	I0414 12:00:32.028330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="89.581µs"
	I0414 12:00:33.122263       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.879µs"
	I0414 12:00:33.163352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="60.888µs"
	I0414 12:00:34.025476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.2256ms"
	I0414 12:00:34.026843       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="105.975µs"
	I0414 12:00:36.411101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.27956ms"
	I0414 12:00:36.411319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="77.053µs"
	I0414 12:00:36.468872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="21.159694ms"
	I0414 12:00:36.469631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="140.235µs"
	I0414 12:00:37.148323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.505481ms"
	I0414 12:00:37.149492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="95.779µs"
	I0414 12:00:37.471671       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-066593"
	
	
	==> kube-proxy [87aa7f319bf45ac618020ba2c806efddf64f11382be7810f4266074ef2477ca4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:00:32.373781       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:00:32.388772       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.103"]
	E0414 12:00:32.388855       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:00:32.469787       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:00:32.469821       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:00:32.469849       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:00:32.474786       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:00:32.477517       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:00:32.477569       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:00:32.480334       1 config.go:199] "Starting service config controller"
	I0414 12:00:32.480353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:00:32.480369       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:00:32.480373       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:00:32.480393       1 config.go:329] "Starting node config controller"
	I0414 12:00:32.480397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:00:32.580468       1 shared_informer.go:320] Caches are synced for node config
	I0414 12:00:32.580513       1 shared_informer.go:320] Caches are synced for service config
	I0414 12:00:32.580526       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0b9a2f65cf2c7a99c87b2ce4afe3b01e0c6df09ac317af70e05c92612a34545f] <==
	W0414 12:00:25.290714       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0414 12:00:25.290754       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.368920       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 12:00:25.369040       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.386328       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 12:00:25.386371       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.434899       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.434953       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.525899       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 12:00:25.527019       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.531854       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.532032       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.547731       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 12:00:25.547774       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.558225       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.558318       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.617168       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 12:00:25.617395       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.626292       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 12:00:25.626437       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.669285       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 12:00:25.669478       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:00:25.745276       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 12:00:25.745376       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0414 12:00:28.396952       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 12:00:28 pause-066593 kubelet[9634]: E0414 12:00:28.092865    9634 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-066593\" already exists" pod="kube-system/kube-apiserver-pause-066593"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.165020    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-066593" podStartSLOduration=1.164980302 podStartE2EDuration="1.164980302s" podCreationTimestamp="2025-04-14 12:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.141802661 +0000 UTC m=+1.363496422" watchObservedRunningTime="2025-04-14 12:00:28.164980302 +0000 UTC m=+1.386674063"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.183404    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-066593" podStartSLOduration=1.183387193 podStartE2EDuration="1.183387193s" podCreationTimestamp="2025-04-14 12:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.166840282 +0000 UTC m=+1.388534043" watchObservedRunningTime="2025-04-14 12:00:28.183387193 +0000 UTC m=+1.405080957"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.195144    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-066593" podStartSLOduration=3.195121947 podStartE2EDuration="3.195121947s" podCreationTimestamp="2025-04-14 12:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.183907243 +0000 UTC m=+1.405601002" watchObservedRunningTime="2025-04-14 12:00:28.195121947 +0000 UTC m=+1.416815711"
	Apr 14 12:00:28 pause-066593 kubelet[9634]: I0414 12:00:28.208967    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-066593" podStartSLOduration=1.208940726 podStartE2EDuration="1.208940726s" podCreationTimestamp="2025-04-14 12:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:28.195805171 +0000 UTC m=+1.417498933" watchObservedRunningTime="2025-04-14 12:00:28.208940726 +0000 UTC m=+1.430634488"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165248    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-xtables-lock\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165364    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxlt\" (UniqueName: \"kubernetes.io/projected/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-api-access-tmxlt\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165405    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-lib-modules\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.165426    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-proxy\") pod \"kube-proxy-ggp22\" (UID: \"c745b3df-3bb6-4de0-acd4-9a541f0aa3e6\") " pod="kube-system/kube-proxy-ggp22"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: E0414 12:00:31.276058    9634 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 14 12:00:31 pause-066593 kubelet[9634]: E0414 12:00:31.276126    9634 projected.go:194] Error preparing data for projected volume kube-api-access-tmxlt for pod kube-system/kube-proxy-ggp22: configmap "kube-root-ca.crt" not found
	Apr 14 12:00:31 pause-066593 kubelet[9634]: E0414 12:00:31.276281    9634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-api-access-tmxlt podName:c745b3df-3bb6-4de0-acd4-9a541f0aa3e6 nodeName:}" failed. No retries permitted until 2025-04-14 12:00:31.776238479 +0000 UTC m=+4.997932221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tmxlt" (UniqueName: "kubernetes.io/projected/c745b3df-3bb6-4de0-acd4-9a541f0aa3e6-kube-api-access-tmxlt") pod "kube-proxy-ggp22" (UID: "c745b3df-3bb6-4de0-acd4-9a541f0aa3e6") : configmap "kube-root-ca.crt" not found
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.871264    9634 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.970526    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xdg\" (UniqueName: \"kubernetes.io/projected/9f645792-327d-4c97-8f28-7c50783fa8af-kube-api-access-n6xdg\") pod \"coredns-668d6bf9bc-5crvb\" (UID: \"9f645792-327d-4c97-8f28-7c50783fa8af\") " pod="kube-system/coredns-668d6bf9bc-5crvb"
	Apr 14 12:00:31 pause-066593 kubelet[9634]: I0414 12:00:31.970612    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f645792-327d-4c97-8f28-7c50783fa8af-config-volume\") pod \"coredns-668d6bf9bc-5crvb\" (UID: \"9f645792-327d-4c97-8f28-7c50783fa8af\") " pod="kube-system/coredns-668d6bf9bc-5crvb"
	Apr 14 12:00:32 pause-066593 kubelet[9634]: I0414 12:00:32.071795    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/269c281c-7c8b-4bf6-8127-04c476b3ef79-config-volume\") pod \"coredns-668d6bf9bc-2k558\" (UID: \"269c281c-7c8b-4bf6-8127-04c476b3ef79\") " pod="kube-system/coredns-668d6bf9bc-2k558"
	Apr 14 12:00:32 pause-066593 kubelet[9634]: I0414 12:00:32.071865    9634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8rw\" (UniqueName: \"kubernetes.io/projected/269c281c-7c8b-4bf6-8127-04c476b3ef79-kube-api-access-qk8rw\") pod \"coredns-668d6bf9bc-2k558\" (UID: \"269c281c-7c8b-4bf6-8127-04c476b3ef79\") " pod="kube-system/coredns-668d6bf9bc-2k558"
	Apr 14 12:00:33 pause-066593 kubelet[9634]: I0414 12:00:33.121339    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5crvb" podStartSLOduration=2.121319272 podStartE2EDuration="2.121319272s" podCreationTimestamp="2025-04-14 12:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:33.121095794 +0000 UTC m=+6.342789554" watchObservedRunningTime="2025-04-14 12:00:33.121319272 +0000 UTC m=+6.343013030"
	Apr 14 12:00:33 pause-066593 kubelet[9634]: I0414 12:00:33.144501    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggp22" podStartSLOduration=2.144480486 podStartE2EDuration="2.144480486s" podCreationTimestamp="2025-04-14 12:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:33.143915593 +0000 UTC m=+6.365609355" watchObservedRunningTime="2025-04-14 12:00:33.144480486 +0000 UTC m=+6.366174246"
	Apr 14 12:00:34 pause-066593 kubelet[9634]: I0414 12:00:34.008811    9634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2k558" podStartSLOduration=3.008785718 podStartE2EDuration="3.008785718s" podCreationTimestamp="2025-04-14 12:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 12:00:33.162594201 +0000 UTC m=+6.384287961" watchObservedRunningTime="2025-04-14 12:00:34.008785718 +0000 UTC m=+7.230479482"
	Apr 14 12:00:36 pause-066593 kubelet[9634]: I0414 12:00:36.385254    9634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: E0414 12:00:37.074268    9634 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632037073739179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: E0414 12:00:37.074328    9634 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632037073739179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: I0414 12:00:37.455668    9634 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 14 12:00:37 pause-066593 kubelet[9634]: I0414 12:00:37.457674    9634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-066593 -n pause-066593
helpers_test.go:261: (dbg) Run:  kubectl --context pause-066593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (392.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (282.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-071646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-071646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m42.365988048s)

                                                
                                                
-- stdout --
	* [old-k8s-version-071646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-071646" primary control-plane node in "old-k8s-version-071646" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 12:00:46.521521  559803 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:00:46.521644  559803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:00:46.521650  559803 out.go:358] Setting ErrFile to fd 2...
	I0414 12:00:46.521653  559803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:00:46.521825  559803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 12:00:46.522431  559803 out.go:352] Setting JSON to false
	I0414 12:00:46.523483  559803 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20597,"bootTime":1744611449,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:00:46.523605  559803 start.go:139] virtualization: kvm guest
	I0414 12:00:46.525520  559803 out.go:177] * [old-k8s-version-071646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:00:46.526786  559803 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 12:00:46.526787  559803 notify.go:220] Checking for updates...
	I0414 12:00:46.529232  559803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:00:46.530308  559803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:00:46.531308  559803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:46.532522  559803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:00:46.533698  559803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:00:46.535429  559803 config.go:182] Loaded profile config "bridge-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:46.535532  559803 config.go:182] Loaded profile config "enable-default-cni-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:46.535642  559803 config.go:182] Loaded profile config "flannel-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:00:46.535742  559803 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:00:46.580709  559803 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 12:00:46.581693  559803 start.go:297] selected driver: kvm2
	I0414 12:00:46.581708  559803 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:00:46.581721  559803 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:00:46.582463  559803 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:00:46.582590  559803 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:00:46.599109  559803 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:00:46.599162  559803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:00:46.599435  559803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:00:46.599476  559803 cni.go:84] Creating CNI manager for ""
	I0414 12:00:46.599519  559803 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:00:46.599527  559803 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:00:46.599579  559803 start.go:340] cluster config:
	{Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:00:46.599680  559803 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:00:46.602089  559803 out.go:177] * Starting "old-k8s-version-071646" primary control-plane node in "old-k8s-version-071646" cluster
	I0414 12:00:46.603248  559803 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:00:46.603353  559803 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 12:00:46.603367  559803 cache.go:56] Caching tarball of preloaded images
	I0414 12:00:46.603463  559803 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:00:46.603474  559803 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 12:00:46.603585  559803 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/config.json ...
	I0414 12:00:46.603616  559803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/config.json: {Name:mk867fcca352ba46f0ee3ad07e4d5ec087a5da9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:00:46.603793  559803 start.go:360] acquireMachinesLock for old-k8s-version-071646: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:00:51.944282  559803 start.go:364] duration metric: took 5.34045417s to acquireMachinesLock for "old-k8s-version-071646"
	I0414 12:00:51.944345  559803 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:00:51.944507  559803 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 12:00:51.946397  559803 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 12:00:51.946603  559803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:00:51.946671  559803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:00:51.966467  559803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
	I0414 12:00:51.967003  559803 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:00:51.967733  559803 main.go:141] libmachine: Using API Version  1
	I0414 12:00:51.967770  559803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:00:51.968191  559803 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:00:51.968419  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetMachineName
	I0414 12:00:51.968597  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:00:51.968779  559803 start.go:159] libmachine.API.Create for "old-k8s-version-071646" (driver="kvm2")
	I0414 12:00:51.968818  559803 client.go:168] LocalClient.Create starting
	I0414 12:00:51.968875  559803 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem
	I0414 12:00:51.968918  559803 main.go:141] libmachine: Decoding PEM data...
	I0414 12:00:51.968938  559803 main.go:141] libmachine: Parsing certificate...
	I0414 12:00:51.969027  559803 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem
	I0414 12:00:51.969067  559803 main.go:141] libmachine: Decoding PEM data...
	I0414 12:00:51.969084  559803 main.go:141] libmachine: Parsing certificate...
	I0414 12:00:51.969109  559803 main.go:141] libmachine: Running pre-create checks...
	I0414 12:00:51.969122  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .PreCreateCheck
	I0414 12:00:51.969592  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetConfigRaw
	I0414 12:00:51.970240  559803 main.go:141] libmachine: Creating machine...
	I0414 12:00:51.970259  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .Create
	I0414 12:00:51.970461  559803 main.go:141] libmachine: (old-k8s-version-071646) creating KVM machine...
	I0414 12:00:51.970482  559803 main.go:141] libmachine: (old-k8s-version-071646) creating network...
	I0414 12:00:51.972115  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found existing default KVM network
	I0414 12:00:51.973506  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:51.973326  560092 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:47:6e} reservation:<nil>}
	I0414 12:00:51.974710  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:51.974620  560092 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201c70}
	I0414 12:00:51.974741  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | created network xml: 
	I0414 12:00:51.974751  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | <network>
	I0414 12:00:51.974760  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |   <name>mk-old-k8s-version-071646</name>
	I0414 12:00:51.974780  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |   <dns enable='no'/>
	I0414 12:00:51.974790  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |   
	I0414 12:00:51.974802  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0414 12:00:51.974815  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |     <dhcp>
	I0414 12:00:51.974844  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0414 12:00:51.974862  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |     </dhcp>
	I0414 12:00:51.974871  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |   </ip>
	I0414 12:00:51.974877  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG |   
	I0414 12:00:51.974883  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | </network>
	I0414 12:00:51.974892  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | 
	I0414 12:00:51.980961  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | trying to create private KVM network mk-old-k8s-version-071646 192.168.50.0/24...
	I0414 12:00:52.076855  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | private KVM network mk-old-k8s-version-071646 192.168.50.0/24 created
	I0414 12:00:52.076894  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:52.076806  560092 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:52.076906  559803 main.go:141] libmachine: (old-k8s-version-071646) setting up store path in /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646 ...
	I0414 12:00:52.076925  559803 main.go:141] libmachine: (old-k8s-version-071646) building disk image from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 12:00:52.091408  559803 main.go:141] libmachine: (old-k8s-version-071646) Downloading /home/jenkins/minikube-integration/20534-503273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 12:00:52.405973  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:52.405807  560092 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa...
	I0414 12:00:52.487656  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:52.487524  560092 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/old-k8s-version-071646.rawdisk...
	I0414 12:00:52.487697  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | Writing magic tar header
	I0414 12:00:52.487710  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | Writing SSH key tar header
	I0414 12:00:52.487723  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:52.487678  560092 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646 ...
	I0414 12:00:52.487851  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646
	I0414 12:00:52.487886  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube/machines
	I0414 12:00:52.487901  559803 main.go:141] libmachine: (old-k8s-version-071646) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646 (perms=drwx------)
	I0414 12:00:52.487918  559803 main.go:141] libmachine: (old-k8s-version-071646) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube/machines (perms=drwxr-xr-x)
	I0414 12:00:52.487932  559803 main.go:141] libmachine: (old-k8s-version-071646) setting executable bit set on /home/jenkins/minikube-integration/20534-503273/.minikube (perms=drwxr-xr-x)
	I0414 12:00:52.487946  559803 main.go:141] libmachine: (old-k8s-version-071646) setting executable bit set on /home/jenkins/minikube-integration/20534-503273 (perms=drwxrwxr-x)
	I0414 12:00:52.487963  559803 main.go:141] libmachine: (old-k8s-version-071646) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 12:00:52.487976  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:00:52.487988  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20534-503273
	I0414 12:00:52.488000  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 12:00:52.488013  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | checking permissions on dir: /home/jenkins
	I0414 12:00:52.488020  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | checking permissions on dir: /home
	I0414 12:00:52.488033  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | skipping /home - not owner
	I0414 12:00:52.488045  559803 main.go:141] libmachine: (old-k8s-version-071646) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 12:00:52.488053  559803 main.go:141] libmachine: (old-k8s-version-071646) creating domain...
	I0414 12:00:52.489146  559803 main.go:141] libmachine: (old-k8s-version-071646) define libvirt domain using xml: 
	I0414 12:00:52.489170  559803 main.go:141] libmachine: (old-k8s-version-071646) <domain type='kvm'>
	I0414 12:00:52.489180  559803 main.go:141] libmachine: (old-k8s-version-071646)   <name>old-k8s-version-071646</name>
	I0414 12:00:52.489192  559803 main.go:141] libmachine: (old-k8s-version-071646)   <memory unit='MiB'>2200</memory>
	I0414 12:00:52.489204  559803 main.go:141] libmachine: (old-k8s-version-071646)   <vcpu>2</vcpu>
	I0414 12:00:52.489214  559803 main.go:141] libmachine: (old-k8s-version-071646)   <features>
	I0414 12:00:52.489223  559803 main.go:141] libmachine: (old-k8s-version-071646)     <acpi/>
	I0414 12:00:52.489235  559803 main.go:141] libmachine: (old-k8s-version-071646)     <apic/>
	I0414 12:00:52.489246  559803 main.go:141] libmachine: (old-k8s-version-071646)     <pae/>
	I0414 12:00:52.489257  559803 main.go:141] libmachine: (old-k8s-version-071646)     
	I0414 12:00:52.489276  559803 main.go:141] libmachine: (old-k8s-version-071646)   </features>
	I0414 12:00:52.489288  559803 main.go:141] libmachine: (old-k8s-version-071646)   <cpu mode='host-passthrough'>
	I0414 12:00:52.489299  559803 main.go:141] libmachine: (old-k8s-version-071646)   
	I0414 12:00:52.489307  559803 main.go:141] libmachine: (old-k8s-version-071646)   </cpu>
	I0414 12:00:52.489316  559803 main.go:141] libmachine: (old-k8s-version-071646)   <os>
	I0414 12:00:52.489326  559803 main.go:141] libmachine: (old-k8s-version-071646)     <type>hvm</type>
	I0414 12:00:52.489335  559803 main.go:141] libmachine: (old-k8s-version-071646)     <boot dev='cdrom'/>
	I0414 12:00:52.489346  559803 main.go:141] libmachine: (old-k8s-version-071646)     <boot dev='hd'/>
	I0414 12:00:52.489366  559803 main.go:141] libmachine: (old-k8s-version-071646)     <bootmenu enable='no'/>
	I0414 12:00:52.489380  559803 main.go:141] libmachine: (old-k8s-version-071646)   </os>
	I0414 12:00:52.489388  559803 main.go:141] libmachine: (old-k8s-version-071646)   <devices>
	I0414 12:00:52.489394  559803 main.go:141] libmachine: (old-k8s-version-071646)     <disk type='file' device='cdrom'>
	I0414 12:00:52.489406  559803 main.go:141] libmachine: (old-k8s-version-071646)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/boot2docker.iso'/>
	I0414 12:00:52.489419  559803 main.go:141] libmachine: (old-k8s-version-071646)       <target dev='hdc' bus='scsi'/>
	I0414 12:00:52.489427  559803 main.go:141] libmachine: (old-k8s-version-071646)       <readonly/>
	I0414 12:00:52.489436  559803 main.go:141] libmachine: (old-k8s-version-071646)     </disk>
	I0414 12:00:52.489443  559803 main.go:141] libmachine: (old-k8s-version-071646)     <disk type='file' device='disk'>
	I0414 12:00:52.489451  559803 main.go:141] libmachine: (old-k8s-version-071646)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 12:00:52.489462  559803 main.go:141] libmachine: (old-k8s-version-071646)       <source file='/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/old-k8s-version-071646.rawdisk'/>
	I0414 12:00:52.489468  559803 main.go:141] libmachine: (old-k8s-version-071646)       <target dev='hda' bus='virtio'/>
	I0414 12:00:52.489472  559803 main.go:141] libmachine: (old-k8s-version-071646)     </disk>
	I0414 12:00:52.489479  559803 main.go:141] libmachine: (old-k8s-version-071646)     <interface type='network'>
	I0414 12:00:52.489488  559803 main.go:141] libmachine: (old-k8s-version-071646)       <source network='mk-old-k8s-version-071646'/>
	I0414 12:00:52.489506  559803 main.go:141] libmachine: (old-k8s-version-071646)       <model type='virtio'/>
	I0414 12:00:52.489520  559803 main.go:141] libmachine: (old-k8s-version-071646)     </interface>
	I0414 12:00:52.489526  559803 main.go:141] libmachine: (old-k8s-version-071646)     <interface type='network'>
	I0414 12:00:52.489533  559803 main.go:141] libmachine: (old-k8s-version-071646)       <source network='default'/>
	I0414 12:00:52.489542  559803 main.go:141] libmachine: (old-k8s-version-071646)       <model type='virtio'/>
	I0414 12:00:52.489550  559803 main.go:141] libmachine: (old-k8s-version-071646)     </interface>
	I0414 12:00:52.489557  559803 main.go:141] libmachine: (old-k8s-version-071646)     <serial type='pty'>
	I0414 12:00:52.489566  559803 main.go:141] libmachine: (old-k8s-version-071646)       <target port='0'/>
	I0414 12:00:52.489576  559803 main.go:141] libmachine: (old-k8s-version-071646)     </serial>
	I0414 12:00:52.489582  559803 main.go:141] libmachine: (old-k8s-version-071646)     <console type='pty'>
	I0414 12:00:52.489586  559803 main.go:141] libmachine: (old-k8s-version-071646)       <target type='serial' port='0'/>
	I0414 12:00:52.489599  559803 main.go:141] libmachine: (old-k8s-version-071646)     </console>
	I0414 12:00:52.489606  559803 main.go:141] libmachine: (old-k8s-version-071646)     <rng model='virtio'>
	I0414 12:00:52.489614  559803 main.go:141] libmachine: (old-k8s-version-071646)       <backend model='random'>/dev/random</backend>
	I0414 12:00:52.489624  559803 main.go:141] libmachine: (old-k8s-version-071646)     </rng>
	I0414 12:00:52.489632  559803 main.go:141] libmachine: (old-k8s-version-071646)     
	I0414 12:00:52.489641  559803 main.go:141] libmachine: (old-k8s-version-071646)     
	I0414 12:00:52.489648  559803 main.go:141] libmachine: (old-k8s-version-071646)   </devices>
	I0414 12:00:52.489657  559803 main.go:141] libmachine: (old-k8s-version-071646) </domain>
	I0414 12:00:52.489666  559803 main.go:141] libmachine: (old-k8s-version-071646) 
	I0414 12:00:52.495183  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:db:6b:41 in network default
	I0414 12:00:52.496298  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:52.496324  559803 main.go:141] libmachine: (old-k8s-version-071646) starting domain...
	I0414 12:00:52.496336  559803 main.go:141] libmachine: (old-k8s-version-071646) ensuring networks are active...
	I0414 12:00:52.497495  559803 main.go:141] libmachine: (old-k8s-version-071646) Ensuring network default is active
	I0414 12:00:52.497932  559803 main.go:141] libmachine: (old-k8s-version-071646) Ensuring network mk-old-k8s-version-071646 is active
	I0414 12:00:52.498860  559803 main.go:141] libmachine: (old-k8s-version-071646) getting domain XML...
	I0414 12:00:52.500087  559803 main.go:141] libmachine: (old-k8s-version-071646) creating domain...
	I0414 12:00:54.119731  559803 main.go:141] libmachine: (old-k8s-version-071646) waiting for IP...
	I0414 12:00:54.120822  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:54.121522  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:54.121544  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:54.121423  560092 retry.go:31] will retry after 240.252965ms: waiting for domain to come up
	I0414 12:00:54.364293  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:54.364987  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:54.365012  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:54.364903  560092 retry.go:31] will retry after 240.679772ms: waiting for domain to come up
	I0414 12:00:54.607419  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:54.608108  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:54.608139  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:54.608078  560092 retry.go:31] will retry after 409.815428ms: waiting for domain to come up
	I0414 12:00:55.020352  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:55.021012  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:55.021065  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:55.020981  560092 retry.go:31] will retry after 429.673599ms: waiting for domain to come up
	I0414 12:00:55.452125  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:55.452693  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:55.452720  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:55.452672  560092 retry.go:31] will retry after 485.496403ms: waiting for domain to come up
	I0414 12:00:55.940731  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:55.941457  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:55.941491  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:55.941347  560092 retry.go:31] will retry after 851.186199ms: waiting for domain to come up
	I0414 12:00:56.793820  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:56.794494  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:56.794523  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:56.794397  560092 retry.go:31] will retry after 732.142052ms: waiting for domain to come up
	I0414 12:00:57.528633  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:57.529195  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:57.529225  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:57.529163  560092 retry.go:31] will retry after 1.450060132s: waiting for domain to come up
	I0414 12:00:58.980461  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:00:58.981033  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:00:58.981061  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:00:58.980997  560092 retry.go:31] will retry after 1.819654451s: waiting for domain to come up
	I0414 12:01:00.802688  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:00.803365  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:01:00.803395  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:01:00.803302  560092 retry.go:31] will retry after 1.959807643s: waiting for domain to come up
	I0414 12:01:02.765487  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:02.766261  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:01:02.766291  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:01:02.766227  560092 retry.go:31] will retry after 2.839811228s: waiting for domain to come up
	I0414 12:01:05.608548  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:05.609225  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:01:05.609317  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:01:05.609204  560092 retry.go:31] will retry after 3.25224337s: waiting for domain to come up
	I0414 12:01:08.863003  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:08.863581  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:01:08.863612  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:01:08.863537  560092 retry.go:31] will retry after 3.900502301s: waiting for domain to come up
	I0414 12:01:12.765898  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:12.766436  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:01:12.766459  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:01:12.766418  560092 retry.go:31] will retry after 5.196457794s: waiting for domain to come up
	I0414 12:01:17.964836  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:17.965408  559803 main.go:141] libmachine: (old-k8s-version-071646) found domain IP: 192.168.50.226
	I0414 12:01:17.965437  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has current primary IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:17.965446  559803 main.go:141] libmachine: (old-k8s-version-071646) reserving static IP address...
	I0414 12:01:17.965759  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-071646", mac: "52:54:00:4b:45:78", ip: "192.168.50.226"} in network mk-old-k8s-version-071646
	I0414 12:01:18.046687  559803 main.go:141] libmachine: (old-k8s-version-071646) reserved static IP address 192.168.50.226 for domain old-k8s-version-071646
	I0414 12:01:18.046727  559803 main.go:141] libmachine: (old-k8s-version-071646) waiting for SSH...
	I0414 12:01:18.046737  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | Getting to WaitForSSH function...
	I0414 12:01:18.049842  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.050294  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.050322  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.050521  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | Using SSH client type: external
	I0414 12:01:18.050550  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa (-rw-------)
	I0414 12:01:18.050597  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 12:01:18.050614  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | About to run SSH command:
	I0414 12:01:18.050630  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | exit 0
	I0414 12:01:18.179673  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | SSH cmd err, output: <nil>: 
	I0414 12:01:18.179985  559803 main.go:141] libmachine: (old-k8s-version-071646) KVM machine creation complete
	I0414 12:01:18.180410  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetConfigRaw
	I0414 12:01:18.180974  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:01:18.181171  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:01:18.181351  559803 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 12:01:18.181377  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetState
	I0414 12:01:18.182877  559803 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 12:01:18.182894  559803 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 12:01:18.182901  559803 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 12:01:18.182910  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:18.185717  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.186116  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.186138  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.186312  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:18.186530  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.186697  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.186868  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:18.187072  559803 main.go:141] libmachine: Using SSH client type: native
	I0414 12:01:18.187399  559803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:01:18.187416  559803 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 12:01:18.298848  559803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:01:18.298887  559803 main.go:141] libmachine: Detecting the provisioner...
	I0414 12:01:18.298894  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:18.301867  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.302230  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.302277  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.302432  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:18.302668  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.302894  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.303038  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:18.303199  559803 main.go:141] libmachine: Using SSH client type: native
	I0414 12:01:18.303459  559803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:01:18.303471  559803 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 12:01:18.416081  559803 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 12:01:18.416176  559803 main.go:141] libmachine: found compatible host: buildroot
	I0414 12:01:18.416190  559803 main.go:141] libmachine: Provisioning with buildroot...
	I0414 12:01:18.416210  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetMachineName
	I0414 12:01:18.416456  559803 buildroot.go:166] provisioning hostname "old-k8s-version-071646"
	I0414 12:01:18.416494  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetMachineName
	I0414 12:01:18.416704  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:18.419507  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.419867  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.419895  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.420028  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:18.420225  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.420418  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.420540  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:18.420686  559803 main.go:141] libmachine: Using SSH client type: native
	I0414 12:01:18.420961  559803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:01:18.420975  559803 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-071646 && echo "old-k8s-version-071646" | sudo tee /etc/hostname
	I0414 12:01:18.542333  559803 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-071646
	
	I0414 12:01:18.542367  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:18.545554  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.546007  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.546070  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.546241  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:18.546456  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.546604  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.546730  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:18.546876  559803 main.go:141] libmachine: Using SSH client type: native
	I0414 12:01:18.547167  559803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:01:18.547186  559803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-071646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-071646/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-071646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 12:01:18.664116  559803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:01:18.664147  559803 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 12:01:18.664171  559803 buildroot.go:174] setting up certificates
	I0414 12:01:18.664183  559803 provision.go:84] configureAuth start
	I0414 12:01:18.664193  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetMachineName
	I0414 12:01:18.664510  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:01:18.667053  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.667443  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.667477  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.667596  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:18.670081  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.670399  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.670425  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.670597  559803 provision.go:143] copyHostCerts
	I0414 12:01:18.670658  559803 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 12:01:18.670683  559803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 12:01:18.670739  559803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 12:01:18.670869  559803 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 12:01:18.670883  559803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 12:01:18.670922  559803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 12:01:18.671001  559803 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 12:01:18.671010  559803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 12:01:18.671035  559803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 12:01:18.671112  559803 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-071646 san=[127.0.0.1 192.168.50.226 localhost minikube old-k8s-version-071646]
	I0414 12:01:18.933474  559803 provision.go:177] copyRemoteCerts
	I0414 12:01:18.933541  559803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 12:01:18.933569  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:18.936418  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.936794  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:18.936827  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:18.936992  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:18.937231  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:18.937404  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:18.937521  559803 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:01:19.021274  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 12:01:19.047264  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 12:01:19.069437  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 12:01:19.092193  559803 provision.go:87] duration metric: took 427.996527ms to configureAuth
	I0414 12:01:19.092223  559803 buildroot.go:189] setting minikube options for container-runtime
	I0414 12:01:19.092467  559803 config.go:182] Loaded profile config "old-k8s-version-071646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 12:01:19.092566  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:19.096693  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.097078  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.097109  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.097282  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:19.097484  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:19.097630  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:19.097764  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:19.097896  559803 main.go:141] libmachine: Using SSH client type: native
	I0414 12:01:19.098100  559803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:01:19.098114  559803 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 12:01:19.335437  559803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 12:01:19.335475  559803 main.go:141] libmachine: Checking connection to Docker...
	I0414 12:01:19.335489  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetURL
	I0414 12:01:19.337009  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | using libvirt version 6000000
	I0414 12:01:19.339308  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.339831  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.339865  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.340003  559803 main.go:141] libmachine: Docker is up and running!
	I0414 12:01:19.340018  559803 main.go:141] libmachine: Reticulating splines...
	I0414 12:01:19.340028  559803 client.go:171] duration metric: took 27.371196632s to LocalClient.Create
	I0414 12:01:19.340063  559803 start.go:167] duration metric: took 27.371287854s to libmachine.API.Create "old-k8s-version-071646"
	I0414 12:01:19.340085  559803 start.go:293] postStartSetup for "old-k8s-version-071646" (driver="kvm2")
	I0414 12:01:19.340101  559803 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 12:01:19.340123  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:01:19.340357  559803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 12:01:19.340386  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:19.342422  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.342723  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.342752  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.342922  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:19.343138  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:19.343342  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:19.343510  559803 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:01:19.425393  559803 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 12:01:19.429657  559803 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 12:01:19.429697  559803 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 12:01:19.429766  559803 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 12:01:19.429837  559803 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 12:01:19.429929  559803 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 12:01:19.439134  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:01:19.461776  559803 start.go:296] duration metric: took 121.651436ms for postStartSetup
	I0414 12:01:19.461850  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetConfigRaw
	I0414 12:01:19.462552  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:01:19.465461  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.465797  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.465836  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.466059  559803 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/config.json ...
	I0414 12:01:19.466248  559803 start.go:128] duration metric: took 27.521727216s to createHost
	I0414 12:01:19.466272  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:19.468727  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.469120  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.469164  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.469363  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:19.469567  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:19.469744  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:19.469930  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:19.470080  559803 main.go:141] libmachine: Using SSH client type: native
	I0414 12:01:19.470315  559803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:01:19.470329  559803 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 12:01:19.579821  559803 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744632079.552914772
	
	I0414 12:01:19.579869  559803 fix.go:216] guest clock: 1744632079.552914772
	I0414 12:01:19.579881  559803 fix.go:229] Guest: 2025-04-14 12:01:19.552914772 +0000 UTC Remote: 2025-04-14 12:01:19.466259652 +0000 UTC m=+32.985651789 (delta=86.65512ms)
	I0414 12:01:19.579911  559803 fix.go:200] guest clock delta is within tolerance: 86.65512ms
	I0414 12:01:19.579918  559803 start.go:83] releasing machines lock for "old-k8s-version-071646", held for 27.63560279s
	I0414 12:01:19.579972  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:01:19.580302  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:01:19.583875  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.584281  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.584350  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.584685  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:01:19.585232  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:01:19.585473  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:01:19.585584  559803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 12:01:19.585631  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:19.585754  559803 ssh_runner.go:195] Run: cat /version.json
	I0414 12:01:19.585786  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:01:19.589023  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.589207  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.589558  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.589585  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.589783  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:19.589881  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:19.589938  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:19.590009  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:19.590293  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:19.590315  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:01:19.590550  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:01:19.590557  559803 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:01:19.590701  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:01:19.590913  559803 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:01:19.676464  559803 ssh_runner.go:195] Run: systemctl --version
	I0414 12:01:19.697992  559803 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 12:01:19.858350  559803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 12:01:19.863845  559803 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 12:01:19.863921  559803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 12:01:19.879275  559803 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 12:01:19.879328  559803 start.go:495] detecting cgroup driver to use...
	I0414 12:01:19.879390  559803 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 12:01:19.895365  559803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 12:01:19.909903  559803 docker.go:217] disabling cri-docker service (if available) ...
	I0414 12:01:19.909978  559803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 12:01:19.925545  559803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 12:01:19.939147  559803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 12:01:20.065224  559803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 12:01:20.232273  559803 docker.go:233] disabling docker service ...
	I0414 12:01:20.232354  559803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 12:01:20.247910  559803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 12:01:20.262854  559803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 12:01:20.417087  559803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 12:01:20.563887  559803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 12:01:20.578484  559803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 12:01:20.600389  559803 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 12:01:20.600472  559803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:01:20.611488  559803 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 12:01:20.611552  559803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:01:20.622577  559803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:01:20.636486  559803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:01:20.650652  559803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 12:01:20.666729  559803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 12:01:20.681696  559803 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 12:01:20.681761  559803 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 12:01:20.694808  559803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 12:01:20.705491  559803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:01:20.876947  559803 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 12:01:20.976430  559803 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 12:01:20.976506  559803 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 12:01:20.981917  559803 start.go:563] Will wait 60s for crictl version
	I0414 12:01:20.981982  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:20.986636  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 12:01:21.038088  559803 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 12:01:21.038191  559803 ssh_runner.go:195] Run: crio --version
	I0414 12:01:21.068647  559803 ssh_runner.go:195] Run: crio --version
	I0414 12:01:21.103490  559803 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 12:01:21.105089  559803 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:01:21.112455  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:21.112804  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:01:08 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:01:21.112845  559803 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:01:21.113113  559803 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 12:01:21.117368  559803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:01:21.131573  559803 kubeadm.go:883] updating cluster {Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 12:01:21.131697  559803 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:01:21.131770  559803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:01:21.172382  559803 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 12:01:21.172486  559803 ssh_runner.go:195] Run: which lz4
	I0414 12:01:21.179095  559803 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 12:01:21.183420  559803 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 12:01:21.183452  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 12:01:22.845554  559803 crio.go:462] duration metric: took 1.666497378s to copy over tarball
	I0414 12:01:22.845633  559803 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 12:01:25.889590  559803 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.043927983s)
	I0414 12:01:25.889627  559803 crio.go:469] duration metric: took 3.044039028s to extract the tarball
	I0414 12:01:25.889638  559803 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 12:01:25.936896  559803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:01:25.992250  559803 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 12:01:25.992291  559803 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 12:01:25.992386  559803 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:01:25.992692  559803 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:01:25.992732  559803 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 12:01:25.992833  559803 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 12:01:25.992961  559803 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:01:25.993108  559803 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:01:25.993175  559803 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 12:01:25.993283  559803 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:01:25.994247  559803 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:01:25.994278  559803 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 12:01:25.994251  559803 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 12:01:25.994326  559803 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:01:25.994712  559803 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 12:01:25.994743  559803 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:01:25.994820  559803 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:01:25.994882  559803 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:01:26.140711  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:01:26.159879  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 12:01:26.164262  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 12:01:26.185576  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:01:26.195757  559803 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 12:01:26.195933  559803 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:01:26.196007  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:26.197089  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:01:26.211427  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 12:01:26.219444  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:01:26.249404  559803 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 12:01:26.249455  559803 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 12:01:26.249505  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:26.320274  559803 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 12:01:26.320324  559803 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 12:01:26.320372  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:26.337983  559803 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 12:01:26.338038  559803 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:01:26.338091  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:26.338282  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:01:26.340601  559803 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 12:01:26.340645  559803 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:01:26.340683  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:26.340804  559803 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 12:01:26.340847  559803 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 12:01:26.340888  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:26.358131  559803 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 12:01:26.358188  559803 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:01:26.358207  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 12:01:26.358236  559803 ssh_runner.go:195] Run: which crictl
	I0414 12:01:26.358318  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 12:01:26.358415  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:01:26.416075  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:01:26.416099  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:01:26.416151  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 12:01:26.459047  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:01:26.459104  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 12:01:26.459156  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:01:26.459203  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 12:01:26.583099  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:01:26.583184  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 12:01:26.583195  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:01:26.632863  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:01:26.632921  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 12:01:26.633001  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 12:01:26.633031  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:01:26.730959  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:01:26.758457  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 12:01:26.768382  559803 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 12:01:26.775023  559803 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 12:01:26.775100  559803 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 12:01:26.802334  559803 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:01:26.806736  559803 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 12:01:26.843456  559803 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 12:01:26.854450  559803 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 12:01:26.859984  559803 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 12:01:27.710991  559803 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:01:27.865352  559803 cache_images.go:92] duration metric: took 1.873032847s to LoadCachedImages
	W0414 12:01:27.865480  559803 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0414 12:01:27.865498  559803 kubeadm.go:934] updating node { 192.168.50.226 8443 v1.20.0 crio true true} ...
	I0414 12:01:27.865636  559803 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-071646 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 12:01:27.865733  559803 ssh_runner.go:195] Run: crio config
	I0414 12:01:27.941820  559803 cni.go:84] Creating CNI manager for ""
	I0414 12:01:27.941852  559803 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:01:27.941869  559803 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 12:01:27.941894  559803 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.226 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-071646 NodeName:old-k8s-version-071646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 12:01:27.942066  559803 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-071646"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 12:01:27.942167  559803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 12:01:27.953997  559803 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 12:01:27.954072  559803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 12:01:27.967576  559803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0414 12:01:27.985731  559803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 12:01:28.001549  559803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 12:01:28.019344  559803 ssh_runner.go:195] Run: grep 192.168.50.226	control-plane.minikube.internal$ /etc/hosts
	I0414 12:01:28.022968  559803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:01:28.034670  559803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:01:28.184251  559803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:01:28.205259  559803 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646 for IP: 192.168.50.226
	I0414 12:01:28.205293  559803 certs.go:194] generating shared ca certs ...
	I0414 12:01:28.205316  559803 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:01:28.205617  559803 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 12:01:28.205690  559803 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 12:01:28.205708  559803 certs.go:256] generating profile certs ...
	I0414 12:01:28.205798  559803 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/client.key
	I0414 12:01:28.205829  559803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/client.crt with IP's: []
	I0414 12:01:28.877004  559803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/client.crt ...
	I0414 12:01:28.877062  559803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/client.crt: {Name:mk97e3057e6330014ac698ef5cd564edc8888182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:01:28.877324  559803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/client.key ...
	I0414 12:01:28.877351  559803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/client.key: {Name:mkc997fbd8fc86f4242a912135f990d8e09dd4c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:01:28.877503  559803 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key.a313a336
	I0414 12:01:28.877533  559803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.crt.a313a336 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.226]
	I0414 12:01:28.919467  559803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.crt.a313a336 ...
	I0414 12:01:28.919507  559803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.crt.a313a336: {Name:mk49753f16eac484900e13794d5195b9142acc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:01:28.919746  559803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key.a313a336 ...
	I0414 12:01:28.919773  559803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key.a313a336: {Name:mk6f9970f8a0e0e8fcbe6042bd7b0e58b95e882e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:01:28.919903  559803 certs.go:381] copying /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.crt.a313a336 -> /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.crt
	I0414 12:01:28.919981  559803 certs.go:385] copying /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key.a313a336 -> /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key
	I0414 12:01:28.920031  559803 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.key
	I0414 12:01:28.920047  559803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.crt with IP's: []
	I0414 12:01:29.306669  559803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.crt ...
	I0414 12:01:29.306705  559803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.crt: {Name:mk253622de3f31b86d19a799fe5fd46064274104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:01:29.306974  559803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.key ...
	I0414 12:01:29.306997  559803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.key: {Name:mk146227b0d98ad48b3dfbd94733368edc48f373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:01:29.307323  559803 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 12:01:29.307386  559803 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 12:01:29.307402  559803 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 12:01:29.307433  559803 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 12:01:29.307459  559803 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 12:01:29.307489  559803 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 12:01:29.307535  559803 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:01:29.308160  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 12:01:29.346897  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 12:01:29.370185  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 12:01:29.403601  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 12:01:29.439944  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 12:01:29.470166  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 12:01:29.501979  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 12:01:29.527240  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 12:01:29.552831  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 12:01:29.577625  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 12:01:29.602933  559803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 12:01:29.638587  559803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 12:01:29.664611  559803 ssh_runner.go:195] Run: openssl version
	I0414 12:01:29.671744  559803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 12:01:29.684078  559803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 12:01:29.690564  559803 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 12:01:29.690638  559803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 12:01:29.696963  559803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 12:01:29.708395  559803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 12:01:29.722587  559803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 12:01:29.730118  559803 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 12:01:29.730167  559803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 12:01:29.736270  559803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 12:01:29.748156  559803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 12:01:29.759657  559803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:01:29.764632  559803 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:01:29.764710  559803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:01:29.771217  559803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
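Note: the 8-hex-digit names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup scheme. A minimal sketch of that pattern for the minikubeCA case, using the paths from the log and assuming the hash matches the value computed above:
	# Compute the subject hash, then create the hash-named symlink OpenSSL resolves.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"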
	I0414 12:01:29.785156  559803 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 12:01:29.790084  559803 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 12:01:29.790140  559803 kubeadm.go:392] StartCluster: {Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:01:29.790260  559803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 12:01:29.790335  559803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:01:29.840017  559803 cri.go:89] found id: ""
	I0414 12:01:29.840092  559803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 12:01:29.851242  559803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:01:29.861681  559803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:01:29.873396  559803 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:01:29.873421  559803 kubeadm.go:157] found existing configuration files:
	
	I0414 12:01:29.873470  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:01:29.883787  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:01:29.883843  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:01:29.894516  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:01:29.904504  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:01:29.904588  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:01:29.916894  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:01:29.927970  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:01:29.928025  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:01:29.938830  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:01:29.949335  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:01:29.949389  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:01:29.960149  559803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:01:30.108252  559803 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:01:30.108326  559803 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:01:30.307816  559803 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:01:30.307971  559803 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:01:30.308105  559803 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:01:30.545974  559803 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:01:30.548037  559803 out.go:235]   - Generating certificates and keys ...
	I0414 12:01:30.548148  559803 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:01:30.548236  559803 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:01:30.883096  559803 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 12:01:31.159741  559803 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 12:01:31.267151  559803 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 12:01:31.415083  559803 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 12:01:31.932768  559803 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 12:01:31.933193  559803 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-071646] and IPs [192.168.50.226 127.0.0.1 ::1]
	I0414 12:01:32.063746  559803 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 12:01:32.063947  559803 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-071646] and IPs [192.168.50.226 127.0.0.1 ::1]
	I0414 12:01:32.397801  559803 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 12:01:32.454615  559803 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 12:01:32.745626  559803 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 12:01:32.745871  559803 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:01:33.111370  559803 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:01:33.333433  559803 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:01:33.740388  559803 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:01:33.837170  559803 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:01:33.860506  559803 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:01:33.863770  559803 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:01:33.863839  559803 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:01:33.999090  559803 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:01:34.001268  559803 out.go:235]   - Booting up control plane ...
	I0414 12:01:34.001424  559803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:01:34.015073  559803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:01:34.016294  559803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:01:34.017080  559803 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:01:34.021096  559803 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:02:14.016309  559803 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:02:14.028227  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:02:14.028962  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:02:19.028473  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:02:19.028755  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:02:29.028184  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:02:29.028427  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:02:49.027856  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:02:49.028141  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:03:29.029663  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:03:29.029937  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:03:29.029985  559803 kubeadm.go:310] 
	I0414 12:03:29.030057  559803 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:03:29.030126  559803 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:03:29.030142  559803 kubeadm.go:310] 
	I0414 12:03:29.030173  559803 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:03:29.030218  559803 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:03:29.030329  559803 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:03:29.030338  559803 kubeadm.go:310] 
	I0414 12:03:29.030491  559803 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:03:29.030540  559803 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:03:29.030594  559803 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:03:29.030605  559803 kubeadm.go:310] 
	I0414 12:03:29.030725  559803 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:03:29.030845  559803 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:03:29.030858  559803 kubeadm.go:310] 
	I0414 12:03:29.031001  559803 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:03:29.031130  559803 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:03:29.031244  559803 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:03:29.031365  559803 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:03:29.031422  559803 kubeadm.go:310] 
	I0414 12:03:29.031568  559803 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:03:29.031718  559803 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:03:29.031880  559803 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 12:03:29.032037  559803 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-071646] and IPs [192.168.50.226 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-071646] and IPs [192.168.50.226 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 12:03:29.032081  559803 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:03:31.361511  559803 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.329390411s)
	I0414 12:03:31.361607  559803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:03:31.376320  559803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:03:31.386724  559803 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:03:31.386745  559803 kubeadm.go:157] found existing configuration files:
	
	I0414 12:03:31.386787  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:03:31.396186  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:03:31.396257  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:03:31.405857  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:03:31.415305  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:03:31.415375  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:03:31.428176  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:03:31.437666  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:03:31.437749  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:03:31.448444  559803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:03:31.457687  559803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:03:31.457760  559803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:03:31.466917  559803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:03:31.693800  559803 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:05:28.123312  559803 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:05:28.123439  559803 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 12:05:28.125335  559803 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:05:28.125431  559803 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:05:28.125539  559803 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:05:28.125681  559803 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:05:28.125787  559803 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:05:28.125899  559803 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:05:28.127747  559803 out.go:235]   - Generating certificates and keys ...
	I0414 12:05:28.127846  559803 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:05:28.127935  559803 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:05:28.128138  559803 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:05:28.128215  559803 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:05:28.128303  559803 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:05:28.128367  559803 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:05:28.128427  559803 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:05:28.128512  559803 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:05:28.128608  559803 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:05:28.128714  559803 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:05:28.128773  559803 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:05:28.128868  559803 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:05:28.128938  559803 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:05:28.129025  559803 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:05:28.129141  559803 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:05:28.129232  559803 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:05:28.129378  559803 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:05:28.129497  559803 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:05:28.129579  559803 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:05:28.129688  559803 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:05:28.131233  559803 out.go:235]   - Booting up control plane ...
	I0414 12:05:28.131373  559803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:05:28.131467  559803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:05:28.131570  559803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:05:28.131662  559803 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:05:28.131797  559803 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:05:28.131864  559803 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:05:28.131940  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:05:28.132133  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:05:28.132241  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:05:28.132441  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:05:28.132510  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:05:28.132699  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:05:28.132762  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:05:28.132951  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:05:28.133053  559803 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:05:28.133289  559803 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:05:28.133296  559803 kubeadm.go:310] 
	I0414 12:05:28.133357  559803 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:05:28.133394  559803 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:05:28.133400  559803 kubeadm.go:310] 
	I0414 12:05:28.133428  559803 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:05:28.133456  559803 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:05:28.133539  559803 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:05:28.133550  559803 kubeadm.go:310] 
	I0414 12:05:28.133630  559803 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:05:28.133658  559803 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:05:28.133692  559803 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:05:28.133702  559803 kubeadm.go:310] 
	I0414 12:05:28.133806  559803 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:05:28.133949  559803 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:05:28.133966  559803 kubeadm.go:310] 
	I0414 12:05:28.134116  559803 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:05:28.134214  559803 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:05:28.134304  559803 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:05:28.134406  559803 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:05:28.134437  559803 kubeadm.go:310] 
	I0414 12:05:28.134488  559803 kubeadm.go:394] duration metric: took 3m58.34435091s to StartCluster
	I0414 12:05:28.134549  559803 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:05:28.134607  559803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:05:28.179948  559803 cri.go:89] found id: ""
	I0414 12:05:28.179986  559803 logs.go:282] 0 containers: []
	W0414 12:05:28.179999  559803 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:05:28.180008  559803 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:05:28.180088  559803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:05:28.216372  559803 cri.go:89] found id: ""
	I0414 12:05:28.216414  559803 logs.go:282] 0 containers: []
	W0414 12:05:28.216426  559803 logs.go:284] No container was found matching "etcd"
	I0414 12:05:28.216434  559803 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:05:28.216492  559803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:05:28.258115  559803 cri.go:89] found id: ""
	I0414 12:05:28.258147  559803 logs.go:282] 0 containers: []
	W0414 12:05:28.258158  559803 logs.go:284] No container was found matching "coredns"
	I0414 12:05:28.258166  559803 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:05:28.258253  559803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:05:28.302042  559803 cri.go:89] found id: ""
	I0414 12:05:28.302076  559803 logs.go:282] 0 containers: []
	W0414 12:05:28.302088  559803 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:05:28.302097  559803 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:05:28.302174  559803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:05:28.337886  559803 cri.go:89] found id: ""
	I0414 12:05:28.337921  559803 logs.go:282] 0 containers: []
	W0414 12:05:28.337932  559803 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:05:28.337940  559803 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:05:28.338005  559803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:05:28.374898  559803 cri.go:89] found id: ""
	I0414 12:05:28.374934  559803 logs.go:282] 0 containers: []
	W0414 12:05:28.374946  559803 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:05:28.374954  559803 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:05:28.375024  559803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:05:28.417999  559803 cri.go:89] found id: ""
	I0414 12:05:28.418143  559803 logs.go:282] 0 containers: []
	W0414 12:05:28.418160  559803 logs.go:284] No container was found matching "kindnet"
	I0414 12:05:28.418178  559803 logs.go:123] Gathering logs for kubelet ...
	I0414 12:05:28.418199  559803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:05:28.473745  559803 logs.go:123] Gathering logs for dmesg ...
	I0414 12:05:28.473797  559803 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:05:28.493462  559803 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:05:28.493494  559803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:05:28.672846  559803 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:05:28.672883  559803 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:05:28.672902  559803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:05:28.784964  559803 logs.go:123] Gathering logs for container status ...
	I0414 12:05:28.785004  559803 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 12:05:28.826007  559803 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 12:05:28.826086  559803 out.go:270] * 
	W0414 12:05:28.826170  559803 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:05:28.826191  559803 out.go:270] * 
	W0414 12:05:28.827098  559803 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 12:05:28.830806  559803 out.go:201] 
	W0414 12:05:28.831988  559803 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:05:28.832056  559803 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 12:05:28.832085  559803 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 12:05:28.833547  559803 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-071646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 6 (247.589404ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 12:05:29.135584  566479 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-071646" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-071646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (282.68s)
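
Note: the kubeadm output above already names the triage path for K8S_KUBELET_NOT_RUNNING. A minimal manual reproduction of that triage, assuming shell access to the old-k8s-version-071646 VM (for example via `minikube ssh -p old-k8s-version-071646`), would look like the sketch below; the commands are the ones the log itself suggests.

	# check whether the kubelet unit ever came up, and why not
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# list the control-plane containers CRI-O actually started (command taken from the kubeadm hint above)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# if the journal shows a cgroup-driver mismatch, retry with the flag suggested in the log:
	#   minikube start -p old-k8s-version-071646 --driver=kvm2 --container-runtime=crio \
	#     --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd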

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-071646 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-071646 create -f testdata/busybox.yaml: exit status 1 (47.922048ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-071646" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-071646 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 6 (259.390537ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 12:05:29.442001  566518 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-071646" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-071646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 6 (247.959947ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 12:05:29.694419  566548 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-071646" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-071646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)
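
Note: DeployApp fails only because the kubeconfig context was never written by the failed FirstStart above. A quick way to confirm that, and to repair it once the apiserver is actually reachable, sketched from the warning printed in the status output:

	# the context the test expects is absent from the kubeconfig
	kubectl config get-contexts
	# the status warning above suggests regenerating it; this only helps after a successful start
	minikube update-context -p old-k8s-version-071646
	# then the test's own command would be retryable:
	kubectl --context old-k8s-version-071646 create -f testdata/busybox.yaml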

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-071646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-071646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m53.564698832s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-071646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-071646 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-071646 describe deploy/metrics-server -n kube-system: exit status 1 (51.41905ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-071646" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-071646 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 6 (243.136322ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 12:07:23.554853  567259 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-071646" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-071646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.86s)
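
Note: the addon enable fails from the same root cause: kube-apiserver on localhost:8443 never came up, so the apply callback minikube runs inside the VM is refused. A hedged sketch of that callback (copied from the MK_ADDON_ENABLE error above) plus the log collection the report box asks for, assuming shell access to the VM:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml
	# "connection to the server localhost:8443 was refused" => collect full logs for the issue report:
	minikube logs -p old-k8s-version-071646 --file=logs.txt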

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (507.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-071646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0414 12:07:32.947283  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:34.319507  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:42.491470  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:53.916476  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:08:13.908735  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:08:14.426252  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:08:39.644025  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:08:40.356780  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:08:42.130573  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:08:51.499152  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:09:19.202186  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:09:35.830330  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:09:58.632069  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-071646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m25.845600276s)

                                                
                                                
-- stdout --
	* [old-k8s-version-071646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-071646" primary control-plane node in "old-k8s-version-071646" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-071646" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 12:07:25.134547  567375 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:07:25.134828  567375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:07:25.134837  567375 out.go:358] Setting ErrFile to fd 2...
	I0414 12:07:25.134841  567375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:07:25.135120  567375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 12:07:25.135777  567375 out.go:352] Setting JSON to false
	I0414 12:07:25.136872  567375 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20996,"bootTime":1744611449,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:07:25.136986  567375 start.go:139] virtualization: kvm guest
	I0414 12:07:25.138916  567375 out.go:177] * [old-k8s-version-071646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:07:25.140232  567375 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 12:07:25.140245  567375 notify.go:220] Checking for updates...
	I0414 12:07:25.142351  567375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:07:25.143503  567375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:07:25.144620  567375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:07:25.145810  567375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:07:25.146774  567375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:07:25.148343  567375 config.go:182] Loaded profile config "old-k8s-version-071646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 12:07:25.148938  567375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:07:25.149010  567375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:07:25.165019  567375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I0414 12:07:25.165517  567375 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:07:25.166069  567375 main.go:141] libmachine: Using API Version  1
	I0414 12:07:25.166095  567375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:07:25.166488  567375 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:07:25.166712  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:25.168267  567375 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 12:07:25.169241  567375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:07:25.169522  567375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:07:25.169557  567375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:07:25.185891  567375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0414 12:07:25.186307  567375 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:07:25.186730  567375 main.go:141] libmachine: Using API Version  1
	I0414 12:07:25.186756  567375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:07:25.187145  567375 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:07:25.187372  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:25.223810  567375 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 12:07:25.224923  567375 start.go:297] selected driver: kvm2
	I0414 12:07:25.224942  567375 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:07:25.225069  567375 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:07:25.225841  567375 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:07:25.225916  567375 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:07:25.241205  567375 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:07:25.241613  567375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:07:25.241651  567375 cni.go:84] Creating CNI manager for ""
	I0414 12:07:25.241691  567375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:07:25.241725  567375 start.go:340] cluster config:
	{Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:07:25.241825  567375 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:07:25.243954  567375 out.go:177] * Starting "old-k8s-version-071646" primary control-plane node in "old-k8s-version-071646" cluster
	I0414 12:07:25.245034  567375 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:07:25.245076  567375 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 12:07:25.245088  567375 cache.go:56] Caching tarball of preloaded images
	I0414 12:07:25.245182  567375 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:07:25.245193  567375 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 12:07:25.245282  567375 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/config.json ...
	I0414 12:07:25.245459  567375 start.go:360] acquireMachinesLock for old-k8s-version-071646: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:07:25.245499  567375 start.go:364] duration metric: took 23.955µs to acquireMachinesLock for "old-k8s-version-071646"
	I0414 12:07:25.245512  567375 start.go:96] Skipping create...Using existing machine configuration
	I0414 12:07:25.245520  567375 fix.go:54] fixHost starting: 
	I0414 12:07:25.245800  567375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:07:25.245836  567375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:07:25.261638  567375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I0414 12:07:25.262197  567375 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:07:25.262618  567375 main.go:141] libmachine: Using API Version  1
	I0414 12:07:25.262647  567375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:07:25.263087  567375 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:07:25.263278  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:25.263481  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetState
	I0414 12:07:25.265234  567375 fix.go:112] recreateIfNeeded on old-k8s-version-071646: state=Stopped err=<nil>
	I0414 12:07:25.265278  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	W0414 12:07:25.265473  567375 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 12:07:25.267016  567375 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-071646" ...
	I0414 12:07:25.268063  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .Start
	I0414 12:07:25.268274  567375 main.go:141] libmachine: (old-k8s-version-071646) starting domain...
	I0414 12:07:25.268300  567375 main.go:141] libmachine: (old-k8s-version-071646) ensuring networks are active...
	I0414 12:07:25.269347  567375 main.go:141] libmachine: (old-k8s-version-071646) Ensuring network default is active
	I0414 12:07:25.269749  567375 main.go:141] libmachine: (old-k8s-version-071646) Ensuring network mk-old-k8s-version-071646 is active
	I0414 12:07:25.270166  567375 main.go:141] libmachine: (old-k8s-version-071646) getting domain XML...
	I0414 12:07:25.271157  567375 main.go:141] libmachine: (old-k8s-version-071646) creating domain...
	I0414 12:07:26.546371  567375 main.go:141] libmachine: (old-k8s-version-071646) waiting for IP...
	I0414 12:07:26.547260  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:26.547780  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:26.547876  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:26.547770  567410 retry.go:31] will retry after 220.621626ms: waiting for domain to come up
	I0414 12:07:26.770539  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:26.771278  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:26.771336  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:26.771232  567410 retry.go:31] will retry after 383.19831ms: waiting for domain to come up
	I0414 12:07:27.155691  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:27.156302  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:27.156359  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:27.156284  567410 retry.go:31] will retry after 371.382596ms: waiting for domain to come up
	I0414 12:07:27.528813  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:27.529361  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:27.529392  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:27.529306  567410 retry.go:31] will retry after 506.737623ms: waiting for domain to come up
	I0414 12:07:28.037868  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:28.038478  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:28.038510  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:28.038435  567410 retry.go:31] will retry after 492.288118ms: waiting for domain to come up
	I0414 12:07:28.532663  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:28.533213  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:28.533240  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:28.533171  567410 retry.go:31] will retry after 660.578666ms: waiting for domain to come up
	I0414 12:07:29.195204  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:29.195837  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:29.195867  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:29.195797  567410 retry.go:31] will retry after 1.099979624s: waiting for domain to come up
	I0414 12:07:30.297314  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:30.297797  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:30.297821  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:30.297777  567410 retry.go:31] will retry after 1.223345399s: waiting for domain to come up
	I0414 12:07:31.523125  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:31.523630  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:31.523653  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:31.523598  567410 retry.go:31] will retry after 1.310525471s: waiting for domain to come up
	I0414 12:07:32.836031  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:32.836638  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:32.836664  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:32.836603  567410 retry.go:31] will retry after 2.12021088s: waiting for domain to come up
	I0414 12:07:34.958449  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:34.958961  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:34.958994  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:34.958915  567410 retry.go:31] will retry after 1.80463693s: waiting for domain to come up
	I0414 12:07:36.765554  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:36.766164  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:36.766232  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:36.766151  567410 retry.go:31] will retry after 2.525211901s: waiting for domain to come up
	I0414 12:07:39.292725  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:39.293262  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | unable to find current IP address of domain old-k8s-version-071646 in network mk-old-k8s-version-071646
	I0414 12:07:39.293283  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | I0414 12:07:39.293222  567410 retry.go:31] will retry after 3.812793735s: waiting for domain to come up
	I0414 12:07:43.108488  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.109058  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has current primary IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.109089  567375 main.go:141] libmachine: (old-k8s-version-071646) found domain IP: 192.168.50.226
	I0414 12:07:43.109135  567375 main.go:141] libmachine: (old-k8s-version-071646) reserving static IP address...
	I0414 12:07:43.109605  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "old-k8s-version-071646", mac: "52:54:00:4b:45:78", ip: "192.168.50.226"} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.109632  567375 main.go:141] libmachine: (old-k8s-version-071646) reserved static IP address 192.168.50.226 for domain old-k8s-version-071646
	I0414 12:07:43.109651  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | skip adding static IP to network mk-old-k8s-version-071646 - found existing host DHCP lease matching {name: "old-k8s-version-071646", mac: "52:54:00:4b:45:78", ip: "192.168.50.226"}
	I0414 12:07:43.109663  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | Getting to WaitForSSH function...
	I0414 12:07:43.109679  567375 main.go:141] libmachine: (old-k8s-version-071646) waiting for SSH...
	I0414 12:07:43.111879  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.112264  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.112319  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.112401  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | Using SSH client type: external
	I0414 12:07:43.112428  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa (-rw-------)
	I0414 12:07:43.112479  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 12:07:43.112489  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | About to run SSH command:
	I0414 12:07:43.112515  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | exit 0
	I0414 12:07:43.236098  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | SSH cmd err, output: <nil>: 
	I0414 12:07:43.236621  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetConfigRaw
	I0414 12:07:43.237345  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:07:43.240188  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.240572  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.240616  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.240865  567375 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/config.json ...
	I0414 12:07:43.241103  567375 machine.go:93] provisionDockerMachine start ...
	I0414 12:07:43.241123  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:43.241360  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:43.243877  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.244207  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.244243  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.244393  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:43.244588  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.244724  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.244847  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:43.244979  567375 main.go:141] libmachine: Using SSH client type: native
	I0414 12:07:43.245261  567375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:07:43.245275  567375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 12:07:43.355692  567375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 12:07:43.355732  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetMachineName
	I0414 12:07:43.356057  567375 buildroot.go:166] provisioning hostname "old-k8s-version-071646"
	I0414 12:07:43.356088  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetMachineName
	I0414 12:07:43.356336  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:43.359561  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.359907  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.359939  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.360080  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:43.360292  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.360450  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.360569  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:43.360727  567375 main.go:141] libmachine: Using SSH client type: native
	I0414 12:07:43.360948  567375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:07:43.360960  567375 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-071646 && echo "old-k8s-version-071646" | sudo tee /etc/hostname
	I0414 12:07:43.483269  567375 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-071646
	
	I0414 12:07:43.483334  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:43.486551  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.486981  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.487012  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.487254  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:43.487486  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.487693  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.487916  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:43.488109  567375 main.go:141] libmachine: Using SSH client type: native
	I0414 12:07:43.488382  567375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:07:43.488406  567375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-071646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-071646/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-071646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 12:07:43.603937  567375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:07:43.603973  567375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 12:07:43.604015  567375 buildroot.go:174] setting up certificates
	I0414 12:07:43.604027  567375 provision.go:84] configureAuth start
	I0414 12:07:43.604041  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetMachineName
	I0414 12:07:43.604385  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:07:43.607074  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.607593  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.607621  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.607794  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:43.610140  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.610468  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.610502  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.610661  567375 provision.go:143] copyHostCerts
	I0414 12:07:43.610715  567375 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 12:07:43.610734  567375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 12:07:43.610812  567375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 12:07:43.610953  567375 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 12:07:43.610965  567375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 12:07:43.610991  567375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 12:07:43.611060  567375 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 12:07:43.611067  567375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 12:07:43.611087  567375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 12:07:43.611146  567375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-071646 san=[127.0.0.1 192.168.50.226 localhost minikube old-k8s-version-071646]
	I0414 12:07:43.811852  567375 provision.go:177] copyRemoteCerts
	I0414 12:07:43.811935  567375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 12:07:43.811970  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:43.815443  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.815819  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.815855  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.816096  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:43.816314  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.816440  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:43.816570  567375 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:07:43.899808  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 12:07:43.923836  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 12:07:43.946962  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 12:07:43.971705  567375 provision.go:87] duration metric: took 367.659566ms to configureAuth
	I0414 12:07:43.971742  567375 buildroot.go:189] setting minikube options for container-runtime
	I0414 12:07:43.972004  567375 config.go:182] Loaded profile config "old-k8s-version-071646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 12:07:43.972089  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:43.975209  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.975566  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:43.975607  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:43.975825  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:43.976061  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.976242  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:43.976392  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:43.976549  567375 main.go:141] libmachine: Using SSH client type: native
	I0414 12:07:43.976772  567375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:07:43.976793  567375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 12:07:44.200625  567375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 12:07:44.200654  567375 machine.go:96] duration metric: took 959.537236ms to provisionDockerMachine
	I0414 12:07:44.200667  567375 start.go:293] postStartSetup for "old-k8s-version-071646" (driver="kvm2")
	I0414 12:07:44.200681  567375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 12:07:44.200704  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:44.201058  567375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 12:07:44.201129  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:44.204217  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.204616  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:44.204654  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.204845  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:44.205041  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:44.205219  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:44.205365  567375 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:07:44.294767  567375 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 12:07:44.299055  567375 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 12:07:44.299081  567375 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 12:07:44.299152  567375 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 12:07:44.299236  567375 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 12:07:44.299376  567375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 12:07:44.308519  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:07:44.331827  567375 start.go:296] duration metric: took 131.126606ms for postStartSetup
	I0414 12:07:44.331876  567375 fix.go:56] duration metric: took 19.086356029s for fixHost
	I0414 12:07:44.331901  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:44.334997  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.335486  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:44.335513  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.335724  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:44.335930  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:44.336088  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:44.336209  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:44.336373  567375 main.go:141] libmachine: Using SSH client type: native
	I0414 12:07:44.336584  567375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0414 12:07:44.336596  567375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 12:07:44.443770  567375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744632464.418446770
	
	I0414 12:07:44.443814  567375 fix.go:216] guest clock: 1744632464.418446770
	I0414 12:07:44.443826  567375 fix.go:229] Guest: 2025-04-14 12:07:44.41844677 +0000 UTC Remote: 2025-04-14 12:07:44.331880919 +0000 UTC m=+19.235576293 (delta=86.565851ms)
	I0414 12:07:44.443879  567375 fix.go:200] guest clock delta is within tolerance: 86.565851ms
	I0414 12:07:44.443886  567375 start.go:83] releasing machines lock for "old-k8s-version-071646", held for 19.198378862s
	I0414 12:07:44.443919  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:44.444225  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:07:44.447350  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.447797  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:44.447840  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.448063  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:44.448715  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:44.448917  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .DriverName
	I0414 12:07:44.449026  567375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 12:07:44.449095  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:44.449183  567375 ssh_runner.go:195] Run: cat /version.json
	I0414 12:07:44.449214  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHHostname
	I0414 12:07:44.452169  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.452480  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.452592  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:44.452641  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.452794  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:44.452864  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:44.452895  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:44.453031  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:44.453249  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:44.453268  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHPort
	I0414 12:07:44.453410  567375 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:07:44.453482  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHKeyPath
	I0414 12:07:44.453651  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetSSHUsername
	I0414 12:07:44.453811  567375 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/old-k8s-version-071646/id_rsa Username:docker}
	I0414 12:07:44.568862  567375 ssh_runner.go:195] Run: systemctl --version
	I0414 12:07:44.575005  567375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 12:07:44.719552  567375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 12:07:44.725638  567375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 12:07:44.725744  567375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 12:07:44.743467  567375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 12:07:44.743494  567375 start.go:495] detecting cgroup driver to use...
	I0414 12:07:44.743559  567375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 12:07:44.759978  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 12:07:44.774687  567375 docker.go:217] disabling cri-docker service (if available) ...
	I0414 12:07:44.774749  567375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 12:07:44.788902  567375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 12:07:44.802484  567375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 12:07:44.928653  567375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 12:07:45.080274  567375 docker.go:233] disabling docker service ...
	I0414 12:07:45.080335  567375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 12:07:45.094662  567375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 12:07:45.109081  567375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 12:07:45.248712  567375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 12:07:45.363639  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 12:07:45.378269  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 12:07:45.397743  567375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 12:07:45.397833  567375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:07:45.408411  567375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 12:07:45.408501  567375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:07:45.418558  567375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:07:45.428605  567375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:07:45.441451  567375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 12:07:45.453897  567375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 12:07:45.464690  567375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 12:07:45.464760  567375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 12:07:45.477264  567375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 12:07:45.486876  567375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:07:45.611284  567375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 12:07:45.703330  567375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 12:07:45.703425  567375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 12:07:45.708698  567375 start.go:563] Will wait 60s for crictl version
	I0414 12:07:45.708760  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:45.712488  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 12:07:45.752001  567375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 12:07:45.752094  567375 ssh_runner.go:195] Run: crio --version
	I0414 12:07:45.780158  567375 ssh_runner.go:195] Run: crio --version
	I0414 12:07:45.812199  567375 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 12:07:45.813453  567375 main.go:141] libmachine: (old-k8s-version-071646) Calling .GetIP
	I0414 12:07:45.816590  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:45.817081  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:45:78", ip: ""} in network mk-old-k8s-version-071646: {Iface:virbr2 ExpiryTime:2025-04-14 13:07:36 +0000 UTC Type:0 Mac:52:54:00:4b:45:78 Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:old-k8s-version-071646 Clientid:01:52:54:00:4b:45:78}
	I0414 12:07:45.817127  567375 main.go:141] libmachine: (old-k8s-version-071646) DBG | domain old-k8s-version-071646 has defined IP address 192.168.50.226 and MAC address 52:54:00:4b:45:78 in network mk-old-k8s-version-071646
	I0414 12:07:45.817440  567375 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 12:07:45.821654  567375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:07:45.834115  567375 kubeadm.go:883] updating cluster {Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 12:07:45.834280  567375 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:07:45.834329  567375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:07:45.879643  567375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 12:07:45.879720  567375 ssh_runner.go:195] Run: which lz4
	I0414 12:07:45.883634  567375 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 12:07:45.888847  567375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 12:07:45.888898  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 12:07:47.316153  567375 crio.go:462] duration metric: took 1.432550009s to copy over tarball
	I0414 12:07:47.316267  567375 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 12:07:50.319981  567375 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.00366983s)
	I0414 12:07:50.320030  567375 crio.go:469] duration metric: took 3.003837756s to extract the tarball
	I0414 12:07:50.320042  567375 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 12:07:50.362546  567375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:07:50.399371  567375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 12:07:50.399404  567375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 12:07:50.399482  567375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:07:50.399564  567375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:07:50.399581  567375 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 12:07:50.399494  567375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:07:50.399519  567375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:07:50.399535  567375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:07:50.399545  567375 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 12:07:50.399551  567375 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 12:07:50.401547  567375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:07:50.401565  567375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:07:50.401549  567375 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 12:07:50.401551  567375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 12:07:50.401550  567375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:07:50.401546  567375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:07:50.401553  567375 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 12:07:50.401553  567375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:07:50.546320  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 12:07:50.549636  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:07:50.554004  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:07:50.572821  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:07:50.572822  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 12:07:50.582909  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:07:50.604470  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 12:07:50.628266  567375 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 12:07:50.628316  567375 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 12:07:50.628365  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:50.650601  567375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 12:07:50.650657  567375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:07:50.650715  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:50.682362  567375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 12:07:50.682419  567375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:07:50.682468  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:50.711453  567375 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 12:07:50.711509  567375 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 12:07:50.711564  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:50.734092  567375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 12:07:50.734150  567375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:07:50.734202  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:50.734092  567375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 12:07:50.734246  567375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:07:50.734305  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:50.735619  567375 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 12:07:50.735651  567375 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 12:07:50.735691  567375 ssh_runner.go:195] Run: which crictl
	I0414 12:07:50.735715  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 12:07:50.735759  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:07:50.735800  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:07:50.735840  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 12:07:50.747695  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:07:50.747754  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:07:50.753061  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 12:07:50.866723  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 12:07:50.910673  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:07:50.910724  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:07:50.910800  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 12:07:50.916672  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:07:50.917116  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:07:50.929885  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 12:07:51.011962  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 12:07:51.052883  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 12:07:51.067796  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 12:07:51.067867  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 12:07:51.089938  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 12:07:51.090018  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 12:07:51.095784  567375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 12:07:51.153446  567375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 12:07:51.180699  567375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 12:07:51.190262  567375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 12:07:51.207043  567375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 12:07:51.221444  567375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 12:07:51.230397  567375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 12:07:51.234010  567375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 12:07:51.870883  567375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:07:52.014642  567375 cache_images.go:92] duration metric: took 1.615219029s to LoadCachedImages
	W0414 12:07:52.014798  567375 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20534-503273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0414 12:07:52.014826  567375 kubeadm.go:934] updating node { 192.168.50.226 8443 v1.20.0 crio true true} ...
	I0414 12:07:52.014929  567375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-071646 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 12:07:52.014999  567375 ssh_runner.go:195] Run: crio config
	I0414 12:07:52.060998  567375 cni.go:84] Creating CNI manager for ""
	I0414 12:07:52.061023  567375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:07:52.061037  567375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 12:07:52.061057  567375 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.226 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-071646 NodeName:old-k8s-version-071646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 12:07:52.061224  567375 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-071646"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 12:07:52.061323  567375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 12:07:52.070996  567375 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 12:07:52.071077  567375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 12:07:52.080199  567375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0414 12:07:52.097307  567375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 12:07:52.116094  567375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 12:07:52.133126  567375 ssh_runner.go:195] Run: grep 192.168.50.226	control-plane.minikube.internal$ /etc/hosts
	I0414 12:07:52.137342  567375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:07:52.149573  567375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:07:52.279537  567375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:07:52.297161  567375 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646 for IP: 192.168.50.226
	I0414 12:07:52.297189  567375 certs.go:194] generating shared ca certs ...
	I0414 12:07:52.297213  567375 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:07:52.297465  567375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 12:07:52.297518  567375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 12:07:52.297534  567375 certs.go:256] generating profile certs ...
	I0414 12:07:52.297661  567375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/client.key
	I0414 12:07:52.297740  567375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key.a313a336
	I0414 12:07:52.297797  567375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.key
	I0414 12:07:52.297939  567375 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 12:07:52.297984  567375 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 12:07:52.297998  567375 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 12:07:52.298031  567375 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 12:07:52.298064  567375 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 12:07:52.298093  567375 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 12:07:52.298161  567375 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:07:52.298858  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 12:07:52.331946  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 12:07:52.367618  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 12:07:52.401005  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 12:07:52.427993  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 12:07:52.455644  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 12:07:52.486347  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 12:07:52.517816  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/old-k8s-version-071646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 12:07:52.551838  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 12:07:52.578501  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 12:07:52.609893  567375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 12:07:52.643890  567375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 12:07:52.673648  567375 ssh_runner.go:195] Run: openssl version
	I0414 12:07:52.679596  567375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 12:07:52.690738  567375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:07:52.694858  567375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:07:52.694932  567375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:07:52.700631  567375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 12:07:52.710531  567375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 12:07:52.720870  567375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 12:07:52.725161  567375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 12:07:52.725215  567375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 12:07:52.730506  567375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 12:07:52.740456  567375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 12:07:52.750436  567375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 12:07:52.754546  567375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 12:07:52.754607  567375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 12:07:52.760068  567375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 12:07:52.770387  567375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 12:07:52.774697  567375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 12:07:52.780807  567375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 12:07:52.787125  567375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 12:07:52.794323  567375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 12:07:52.800479  567375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 12:07:52.806398  567375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 12:07:52.812563  567375 kubeadm.go:392] StartCluster: {Name:old-k8s-version-071646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-ver
sion-071646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:07:52.812675  567375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 12:07:52.812749  567375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:07:52.850744  567375 cri.go:89] found id: ""
	I0414 12:07:52.850826  567375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 12:07:52.862016  567375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 12:07:52.862040  567375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 12:07:52.862110  567375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 12:07:52.873428  567375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 12:07:52.874329  567375 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-071646" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:07:52.874932  567375 kubeconfig.go:62] /home/jenkins/minikube-integration/20534-503273/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-071646" cluster setting kubeconfig missing "old-k8s-version-071646" context setting]
	I0414 12:07:52.875845  567375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:07:52.877577  567375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 12:07:52.888225  567375 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.226
	I0414 12:07:52.888264  567375 kubeadm.go:1160] stopping kube-system containers ...
	I0414 12:07:52.888279  567375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 12:07:52.888332  567375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:07:52.924545  567375 cri.go:89] found id: ""
	I0414 12:07:52.924637  567375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 12:07:52.942410  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:07:52.952108  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:07:52.952133  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:07:52.952188  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:07:52.961327  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:07:52.961387  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:07:52.970559  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:07:52.979161  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:07:52.979219  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:07:52.988445  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:07:52.997097  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:07:52.997165  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:07:53.006236  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:07:53.017456  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:07:53.017528  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:07:53.036689  567375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:07:53.046830  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:07:53.424150  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:07:54.373160  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:07:54.615862  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:07:54.714332  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:07:54.809917  567375 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:07:54.810022  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:55.311050  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:55.810170  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:56.310161  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:56.811111  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:57.310957  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:57.810351  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:58.310454  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:58.810513  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:59.310850  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:07:59.810187  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:00.310310  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:00.810728  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:01.310114  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:01.810983  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:02.310823  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:02.810721  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:03.310559  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:03.810465  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:04.310692  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:04.810904  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:05.310313  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:05.810145  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:06.311082  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:06.810368  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:07.310240  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:07.810256  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:08.310531  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:08.810736  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:09.311031  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:09.810316  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:10.311046  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:10.810852  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:11.310936  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:11.810247  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:12.310166  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:12.810435  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:13.310292  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:13.810348  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:14.310732  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:14.810796  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:15.310292  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:15.810135  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:16.310673  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:16.810350  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:17.310960  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:17.810170  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:18.310547  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:18.810759  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:19.310587  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:19.810759  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:20.310064  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:20.810848  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:21.310624  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:21.810425  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:22.310189  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:22.810638  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:23.310220  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:23.810711  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:24.310188  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:24.810885  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:25.310503  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:25.810520  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:26.310420  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:26.810733  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:27.310652  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:27.810163  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:28.310388  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:28.811176  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:29.310585  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:29.810240  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:30.310441  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:30.811036  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:31.310851  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:31.811075  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:32.310293  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:32.810176  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:33.310759  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:33.811097  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:34.310887  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:34.811156  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:35.310277  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:35.810554  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:36.310872  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:36.810220  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:37.311003  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:37.811097  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:38.310908  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:38.810562  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:39.310364  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:39.810937  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:40.310395  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:40.810589  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:41.310442  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:41.810682  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:42.310934  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:42.810947  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:43.310503  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:43.810864  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:44.310879  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:44.810155  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:45.310580  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:45.810885  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:46.310274  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:46.811080  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:47.311001  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:47.810860  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:48.310286  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:48.811150  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:49.310295  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:49.810823  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:50.310485  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:50.810794  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:51.310238  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:51.810938  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:52.310224  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:52.810835  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:53.310717  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:53.811103  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:54.310568  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:54.810780  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:08:54.810890  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:08:54.854267  567375 cri.go:89] found id: ""
	I0414 12:08:54.854291  567375 logs.go:282] 0 containers: []
	W0414 12:08:54.854299  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:08:54.854304  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:08:54.854367  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:08:54.889956  567375 cri.go:89] found id: ""
	I0414 12:08:54.889983  567375 logs.go:282] 0 containers: []
	W0414 12:08:54.889994  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:08:54.890001  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:08:54.890062  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:08:54.923053  567375 cri.go:89] found id: ""
	I0414 12:08:54.923098  567375 logs.go:282] 0 containers: []
	W0414 12:08:54.923109  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:08:54.923117  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:08:54.923188  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:08:54.955890  567375 cri.go:89] found id: ""
	I0414 12:08:54.955920  567375 logs.go:282] 0 containers: []
	W0414 12:08:54.955932  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:08:54.955939  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:08:54.956005  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:08:54.989399  567375 cri.go:89] found id: ""
	I0414 12:08:54.989434  567375 logs.go:282] 0 containers: []
	W0414 12:08:54.989446  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:08:54.989454  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:08:54.989523  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:08:55.026532  567375 cri.go:89] found id: ""
	I0414 12:08:55.026561  567375 logs.go:282] 0 containers: []
	W0414 12:08:55.026574  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:08:55.026582  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:08:55.026673  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:08:55.059337  567375 cri.go:89] found id: ""
	I0414 12:08:55.059365  567375 logs.go:282] 0 containers: []
	W0414 12:08:55.059377  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:08:55.059407  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:08:55.059470  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:08:55.091526  567375 cri.go:89] found id: ""
	I0414 12:08:55.091558  567375 logs.go:282] 0 containers: []
	W0414 12:08:55.091568  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:08:55.091578  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:08:55.091590  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:08:55.144478  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:08:55.144512  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:08:55.158094  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:08:55.158136  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:08:55.277887  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:08:55.277916  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:08:55.277929  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:08:55.353943  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:08:55.353982  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:08:57.892522  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:08:57.907116  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:08:57.907207  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:08:57.948306  567375 cri.go:89] found id: ""
	I0414 12:08:57.948340  567375 logs.go:282] 0 containers: []
	W0414 12:08:57.948351  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:08:57.948359  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:08:57.948423  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:08:57.983883  567375 cri.go:89] found id: ""
	I0414 12:08:57.983923  567375 logs.go:282] 0 containers: []
	W0414 12:08:57.983936  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:08:57.983946  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:08:57.984032  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:08:58.020988  567375 cri.go:89] found id: ""
	I0414 12:08:58.021016  567375 logs.go:282] 0 containers: []
	W0414 12:08:58.021029  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:08:58.021036  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:08:58.021123  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:08:58.058582  567375 cri.go:89] found id: ""
	I0414 12:08:58.058615  567375 logs.go:282] 0 containers: []
	W0414 12:08:58.058624  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:08:58.058630  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:08:58.058701  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:08:58.091025  567375 cri.go:89] found id: ""
	I0414 12:08:58.091064  567375 logs.go:282] 0 containers: []
	W0414 12:08:58.091077  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:08:58.091085  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:08:58.091165  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:08:58.123093  567375 cri.go:89] found id: ""
	I0414 12:08:58.123133  567375 logs.go:282] 0 containers: []
	W0414 12:08:58.123145  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:08:58.123155  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:08:58.123216  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:08:58.164308  567375 cri.go:89] found id: ""
	I0414 12:08:58.164343  567375 logs.go:282] 0 containers: []
	W0414 12:08:58.164355  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:08:58.164364  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:08:58.164437  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:08:58.212044  567375 cri.go:89] found id: ""
	I0414 12:08:58.212092  567375 logs.go:282] 0 containers: []
	W0414 12:08:58.212106  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:08:58.212119  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:08:58.212134  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:08:58.283065  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:08:58.283115  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:08:58.310334  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:08:58.310372  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:08:58.386374  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:08:58.386394  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:08:58.386412  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:08:58.460556  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:08:58.460604  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:01.000929  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:01.015823  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:01.015894  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:01.052284  567375 cri.go:89] found id: ""
	I0414 12:09:01.052317  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.052330  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:01.052338  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:01.052401  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:01.085311  567375 cri.go:89] found id: ""
	I0414 12:09:01.085339  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.085347  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:01.085353  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:01.085405  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:01.119553  567375 cri.go:89] found id: ""
	I0414 12:09:01.119586  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.119595  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:01.119604  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:01.119670  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:01.153276  567375 cri.go:89] found id: ""
	I0414 12:09:01.153317  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.153330  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:01.153338  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:01.153403  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:01.190574  567375 cri.go:89] found id: ""
	I0414 12:09:01.190601  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.190609  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:01.190615  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:01.190672  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:01.230751  567375 cri.go:89] found id: ""
	I0414 12:09:01.230788  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.230799  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:01.230808  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:01.230878  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:01.272517  567375 cri.go:89] found id: ""
	I0414 12:09:01.272542  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.272549  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:01.272555  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:01.272603  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:01.306305  567375 cri.go:89] found id: ""
	I0414 12:09:01.306334  567375 logs.go:282] 0 containers: []
	W0414 12:09:01.306346  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:01.306356  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:01.306368  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:01.362655  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:01.362699  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:01.376368  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:01.376398  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:01.461134  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:01.461168  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:01.461186  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:01.543867  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:01.543912  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:04.081301  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:04.094270  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:04.094332  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:04.127992  567375 cri.go:89] found id: ""
	I0414 12:09:04.128032  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.128045  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:04.128054  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:04.128119  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:04.162135  567375 cri.go:89] found id: ""
	I0414 12:09:04.162172  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.162185  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:04.162192  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:04.162259  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:04.195161  567375 cri.go:89] found id: ""
	I0414 12:09:04.195198  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.195210  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:04.195218  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:04.195276  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:04.227046  567375 cri.go:89] found id: ""
	I0414 12:09:04.227081  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.227091  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:04.227097  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:04.227165  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:04.265375  567375 cri.go:89] found id: ""
	I0414 12:09:04.265408  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.265416  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:04.265424  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:04.265508  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:04.299861  567375 cri.go:89] found id: ""
	I0414 12:09:04.299887  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.299895  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:04.299901  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:04.299960  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:04.333564  567375 cri.go:89] found id: ""
	I0414 12:09:04.333598  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.333610  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:04.333619  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:04.333682  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:04.367909  567375 cri.go:89] found id: ""
	I0414 12:09:04.367938  567375 logs.go:282] 0 containers: []
	W0414 12:09:04.367948  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:04.367961  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:04.367974  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:04.442267  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:04.442310  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:04.482718  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:04.482755  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:04.537427  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:04.537468  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:04.551208  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:04.551242  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:04.621140  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:07.123060  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:07.135611  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:07.135687  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:07.170244  567375 cri.go:89] found id: ""
	I0414 12:09:07.170279  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.170291  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:07.170299  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:07.170362  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:07.203512  567375 cri.go:89] found id: ""
	I0414 12:09:07.203544  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.203552  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:07.203558  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:07.203611  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:07.237430  567375 cri.go:89] found id: ""
	I0414 12:09:07.237464  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.237475  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:07.237483  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:07.237549  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:07.270484  567375 cri.go:89] found id: ""
	I0414 12:09:07.270516  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.270526  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:07.270533  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:07.270595  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:07.304552  567375 cri.go:89] found id: ""
	I0414 12:09:07.304584  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.304595  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:07.304603  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:07.304664  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:07.337261  567375 cri.go:89] found id: ""
	I0414 12:09:07.337291  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.337300  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:07.337306  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:07.337368  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:07.381449  567375 cri.go:89] found id: ""
	I0414 12:09:07.381486  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.381548  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:07.381563  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:07.381621  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:07.414909  567375 cri.go:89] found id: ""
	I0414 12:09:07.414947  567375 logs.go:282] 0 containers: []
	W0414 12:09:07.414960  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:07.414973  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:07.414990  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:07.453213  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:07.453261  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:07.504346  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:07.504393  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:07.517967  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:07.517993  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:07.594022  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:07.594062  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:07.594078  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:10.172715  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:10.186285  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:10.186369  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:10.219775  567375 cri.go:89] found id: ""
	I0414 12:09:10.219810  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.219823  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:10.219831  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:10.219908  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:10.253978  567375 cri.go:89] found id: ""
	I0414 12:09:10.254006  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.254014  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:10.254020  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:10.254073  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:10.290924  567375 cri.go:89] found id: ""
	I0414 12:09:10.290969  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.290977  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:10.290983  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:10.291057  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:10.329068  567375 cri.go:89] found id: ""
	I0414 12:09:10.329099  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.329110  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:10.329118  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:10.329189  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:10.369838  567375 cri.go:89] found id: ""
	I0414 12:09:10.369873  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.369882  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:10.369888  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:10.369944  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:10.403744  567375 cri.go:89] found id: ""
	I0414 12:09:10.403781  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.403793  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:10.403800  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:10.403866  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:10.437235  567375 cri.go:89] found id: ""
	I0414 12:09:10.437260  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.437269  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:10.437275  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:10.437341  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:10.473256  567375 cri.go:89] found id: ""
	I0414 12:09:10.473285  567375 logs.go:282] 0 containers: []
	W0414 12:09:10.473296  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:10.473307  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:10.473329  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:10.525078  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:10.525117  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:10.538974  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:10.539032  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:10.607822  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:10.607851  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:10.607865  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:10.691725  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:10.691780  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:13.233845  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:13.246857  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:13.246944  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:13.286390  567375 cri.go:89] found id: ""
	I0414 12:09:13.286419  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.286427  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:13.286436  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:13.286487  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:13.324188  567375 cri.go:89] found id: ""
	I0414 12:09:13.324220  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.324233  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:13.324240  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:13.324307  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:13.358273  567375 cri.go:89] found id: ""
	I0414 12:09:13.358305  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.358319  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:13.358326  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:13.358380  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:13.391351  567375 cri.go:89] found id: ""
	I0414 12:09:13.391380  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.391391  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:13.391398  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:13.391451  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:13.424240  567375 cri.go:89] found id: ""
	I0414 12:09:13.424270  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.424278  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:13.424284  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:13.424346  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:13.457664  567375 cri.go:89] found id: ""
	I0414 12:09:13.457700  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.457712  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:13.457720  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:13.457787  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:13.490986  567375 cri.go:89] found id: ""
	I0414 12:09:13.491020  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.491031  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:13.491039  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:13.491126  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:13.527968  567375 cri.go:89] found id: ""
	I0414 12:09:13.528002  567375 logs.go:282] 0 containers: []
	W0414 12:09:13.528044  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:13.528074  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:13.528096  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:13.604331  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:13.604359  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:13.604376  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:13.683554  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:13.683603  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:13.725310  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:13.725352  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:13.776459  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:13.776493  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:16.290859  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:16.303151  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:16.303233  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:16.335928  567375 cri.go:89] found id: ""
	I0414 12:09:16.335962  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.335974  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:16.335982  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:16.336058  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:16.368906  567375 cri.go:89] found id: ""
	I0414 12:09:16.368936  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.368945  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:16.368951  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:16.369005  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:16.404659  567375 cri.go:89] found id: ""
	I0414 12:09:16.404697  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.404710  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:16.404725  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:16.404795  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:16.440560  567375 cri.go:89] found id: ""
	I0414 12:09:16.440592  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.440601  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:16.440607  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:16.440663  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:16.473941  567375 cri.go:89] found id: ""
	I0414 12:09:16.473974  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.473988  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:16.473996  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:16.474064  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:16.506540  567375 cri.go:89] found id: ""
	I0414 12:09:16.506572  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.506581  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:16.506587  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:16.506639  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:16.539266  567375 cri.go:89] found id: ""
	I0414 12:09:16.539305  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.539317  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:16.539325  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:16.539383  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:16.575652  567375 cri.go:89] found id: ""
	I0414 12:09:16.575685  567375 logs.go:282] 0 containers: []
	W0414 12:09:16.575695  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:16.575708  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:16.575725  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:16.626224  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:16.626266  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:16.639646  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:16.639677  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:16.707099  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:16.707125  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:16.707142  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:16.787965  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:16.788008  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:19.332268  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:19.344928  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:19.344994  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:19.379019  567375 cri.go:89] found id: ""
	I0414 12:09:19.379041  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.379049  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:19.379060  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:19.379110  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:19.416702  567375 cri.go:89] found id: ""
	I0414 12:09:19.416733  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.416744  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:19.416752  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:19.416818  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:19.454383  567375 cri.go:89] found id: ""
	I0414 12:09:19.454414  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.454426  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:19.454435  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:19.454504  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:19.490462  567375 cri.go:89] found id: ""
	I0414 12:09:19.490498  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.490512  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:19.490522  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:19.490596  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:19.524619  567375 cri.go:89] found id: ""
	I0414 12:09:19.524651  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.524661  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:19.524670  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:19.524733  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:19.558022  567375 cri.go:89] found id: ""
	I0414 12:09:19.558055  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.558067  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:19.558075  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:19.558135  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:19.590597  567375 cri.go:89] found id: ""
	I0414 12:09:19.590622  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.590631  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:19.590637  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:19.590684  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:19.622262  567375 cri.go:89] found id: ""
	I0414 12:09:19.622294  567375 logs.go:282] 0 containers: []
	W0414 12:09:19.622307  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:19.622320  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:19.622333  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:19.677006  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:19.677049  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:19.693174  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:19.693208  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:19.772102  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:19.772127  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:19.772141  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:19.850142  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:19.850180  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:22.391837  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:22.405525  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:22.405608  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:22.440146  567375 cri.go:89] found id: ""
	I0414 12:09:22.440173  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.440182  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:22.440190  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:22.440255  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:22.480640  567375 cri.go:89] found id: ""
	I0414 12:09:22.480669  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.480678  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:22.480684  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:22.480747  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:22.513533  567375 cri.go:89] found id: ""
	I0414 12:09:22.513559  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.513573  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:22.513580  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:22.513631  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:22.546099  567375 cri.go:89] found id: ""
	I0414 12:09:22.546145  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.546158  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:22.546166  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:22.546222  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:22.578905  567375 cri.go:89] found id: ""
	I0414 12:09:22.578933  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.578943  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:22.578950  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:22.579016  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:22.616496  567375 cri.go:89] found id: ""
	I0414 12:09:22.616525  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.616536  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:22.616544  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:22.616614  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:22.650935  567375 cri.go:89] found id: ""
	I0414 12:09:22.650971  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.650983  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:22.650991  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:22.651052  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:22.688314  567375 cri.go:89] found id: ""
	I0414 12:09:22.688344  567375 logs.go:282] 0 containers: []
	W0414 12:09:22.688354  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:22.688363  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:22.688375  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:22.744366  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:22.744417  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:22.758161  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:22.758193  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:22.825249  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:22.825273  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:22.825289  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:22.903366  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:22.903411  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:25.440976  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:25.454760  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:25.454842  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:25.493183  567375 cri.go:89] found id: ""
	I0414 12:09:25.493218  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.493227  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:25.493234  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:25.493318  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:25.527661  567375 cri.go:89] found id: ""
	I0414 12:09:25.527692  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.527701  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:25.527707  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:25.527794  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:25.565673  567375 cri.go:89] found id: ""
	I0414 12:09:25.565702  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.565714  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:25.565722  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:25.565776  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:25.597079  567375 cri.go:89] found id: ""
	I0414 12:09:25.597117  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.597125  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:25.597132  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:25.597192  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:25.631576  567375 cri.go:89] found id: ""
	I0414 12:09:25.631609  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.631619  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:25.631628  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:25.631704  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:25.668941  567375 cri.go:89] found id: ""
	I0414 12:09:25.668972  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.668985  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:25.668993  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:25.669065  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:25.708806  567375 cri.go:89] found id: ""
	I0414 12:09:25.708833  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.708845  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:25.708853  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:25.708926  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:25.742040  567375 cri.go:89] found id: ""
	I0414 12:09:25.742068  567375 logs.go:282] 0 containers: []
	W0414 12:09:25.742076  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:25.742085  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:25.742101  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:25.793860  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:25.793900  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:25.807581  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:25.807630  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:25.883460  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:25.883492  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:25.883511  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:25.963787  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:25.963839  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:28.502543  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:28.515508  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:28.515574  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:28.548613  567375 cri.go:89] found id: ""
	I0414 12:09:28.548657  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.548670  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:28.548678  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:28.548741  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:28.584226  567375 cri.go:89] found id: ""
	I0414 12:09:28.584259  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.584268  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:28.584273  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:28.584335  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:28.617058  567375 cri.go:89] found id: ""
	I0414 12:09:28.617102  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.617115  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:28.617123  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:28.617181  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:28.648929  567375 cri.go:89] found id: ""
	I0414 12:09:28.648962  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.648974  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:28.648984  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:28.649051  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:28.686184  567375 cri.go:89] found id: ""
	I0414 12:09:28.686223  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.686237  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:28.686247  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:28.686320  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:28.721320  567375 cri.go:89] found id: ""
	I0414 12:09:28.721358  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.721370  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:28.721380  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:28.721437  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:28.754095  567375 cri.go:89] found id: ""
	I0414 12:09:28.754129  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.754141  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:28.754158  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:28.754223  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:28.789349  567375 cri.go:89] found id: ""
	I0414 12:09:28.789382  567375 logs.go:282] 0 containers: []
	W0414 12:09:28.789399  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:28.789413  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:28.789432  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:28.826594  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:28.826626  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:28.877014  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:28.877056  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:28.890692  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:28.890725  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:28.965110  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:28.965131  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:28.965144  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:31.544474  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:31.557227  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:31.557302  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:31.588857  567375 cri.go:89] found id: ""
	I0414 12:09:31.588895  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.588906  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:31.588914  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:31.588978  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:31.623412  567375 cri.go:89] found id: ""
	I0414 12:09:31.623445  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.623456  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:31.623463  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:31.623534  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:31.657748  567375 cri.go:89] found id: ""
	I0414 12:09:31.657775  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.657784  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:31.657790  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:31.657851  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:31.693246  567375 cri.go:89] found id: ""
	I0414 12:09:31.693272  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.693287  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:31.693293  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:31.693345  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:31.725976  567375 cri.go:89] found id: ""
	I0414 12:09:31.726009  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.726021  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:31.726029  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:31.726099  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:31.758471  567375 cri.go:89] found id: ""
	I0414 12:09:31.758507  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.758519  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:31.758527  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:31.758583  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:31.793025  567375 cri.go:89] found id: ""
	I0414 12:09:31.793059  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.793071  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:31.793079  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:31.793142  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:31.826783  567375 cri.go:89] found id: ""
	I0414 12:09:31.826811  567375 logs.go:282] 0 containers: []
	W0414 12:09:31.826822  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:31.826834  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:31.826852  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:31.882513  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:31.882557  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:31.895920  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:31.895950  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:31.961089  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:31.961110  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:31.961123  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:32.035866  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:32.035915  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:34.585633  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:34.599205  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:34.599304  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:34.631732  567375 cri.go:89] found id: ""
	I0414 12:09:34.631763  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.631775  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:34.631785  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:34.631856  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:34.665272  567375 cri.go:89] found id: ""
	I0414 12:09:34.665297  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.665305  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:34.665311  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:34.665364  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:34.702980  567375 cri.go:89] found id: ""
	I0414 12:09:34.703006  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.703015  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:34.703021  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:34.703076  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:34.735580  567375 cri.go:89] found id: ""
	I0414 12:09:34.735668  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.735692  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:34.735704  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:34.735770  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:34.769438  567375 cri.go:89] found id: ""
	I0414 12:09:34.769475  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.769487  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:34.769495  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:34.769570  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:34.807932  567375 cri.go:89] found id: ""
	I0414 12:09:34.807971  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.807982  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:34.807989  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:34.808080  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:34.838962  567375 cri.go:89] found id: ""
	I0414 12:09:34.838988  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.838996  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:34.839006  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:34.839060  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:34.871469  567375 cri.go:89] found id: ""
	I0414 12:09:34.871495  567375 logs.go:282] 0 containers: []
	W0414 12:09:34.871505  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:34.871518  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:34.871534  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:34.922022  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:34.922062  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:34.935732  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:34.935762  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:35.000054  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:35.000086  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:35.000106  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:35.075520  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:35.075561  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:37.614204  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:37.627002  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:37.627087  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:37.659903  567375 cri.go:89] found id: ""
	I0414 12:09:37.659930  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.659939  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:37.659946  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:37.659997  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:37.693099  567375 cri.go:89] found id: ""
	I0414 12:09:37.693137  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.693149  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:37.693157  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:37.693223  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:37.728098  567375 cri.go:89] found id: ""
	I0414 12:09:37.728135  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.728148  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:37.728160  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:37.728229  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:37.760113  567375 cri.go:89] found id: ""
	I0414 12:09:37.760140  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.760150  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:37.760155  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:37.760213  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:37.791829  567375 cri.go:89] found id: ""
	I0414 12:09:37.791859  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.791867  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:37.791874  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:37.791939  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:37.844164  567375 cri.go:89] found id: ""
	I0414 12:09:37.844197  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.844206  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:37.844213  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:37.844271  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:37.878383  567375 cri.go:89] found id: ""
	I0414 12:09:37.878414  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.878423  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:37.878431  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:37.878513  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:37.912075  567375 cri.go:89] found id: ""
	I0414 12:09:37.912103  567375 logs.go:282] 0 containers: []
	W0414 12:09:37.912124  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:37.912135  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:37.912151  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:37.963680  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:37.963724  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:37.977345  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:37.977381  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:38.049453  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:38.049475  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:38.049490  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:38.128441  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:38.128482  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:40.666161  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:40.678321  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:40.678390  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:40.712055  567375 cri.go:89] found id: ""
	I0414 12:09:40.712088  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.712103  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:40.712121  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:40.712184  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:40.743158  567375 cri.go:89] found id: ""
	I0414 12:09:40.743188  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.743199  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:40.743204  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:40.743261  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:40.774684  567375 cri.go:89] found id: ""
	I0414 12:09:40.774720  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.774731  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:40.774739  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:40.774793  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:40.808209  567375 cri.go:89] found id: ""
	I0414 12:09:40.808234  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.808243  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:40.808248  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:40.808307  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:40.844909  567375 cri.go:89] found id: ""
	I0414 12:09:40.844946  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.844957  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:40.844966  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:40.845028  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:40.879900  567375 cri.go:89] found id: ""
	I0414 12:09:40.879932  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.879943  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:40.879959  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:40.880028  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:40.911334  567375 cri.go:89] found id: ""
	I0414 12:09:40.911370  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.911383  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:40.911392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:40.911461  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:40.946479  567375 cri.go:89] found id: ""
	I0414 12:09:40.946517  567375 logs.go:282] 0 containers: []
	W0414 12:09:40.946529  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:40.946542  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:40.946556  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:40.998769  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:40.998810  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:41.012402  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:41.012433  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:41.082652  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:41.082679  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:41.082696  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:41.160429  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:41.160477  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:43.703385  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:43.716524  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:43.716592  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:43.752329  567375 cri.go:89] found id: ""
	I0414 12:09:43.752382  567375 logs.go:282] 0 containers: []
	W0414 12:09:43.752394  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:43.752403  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:43.752479  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:43.794558  567375 cri.go:89] found id: ""
	I0414 12:09:43.794593  567375 logs.go:282] 0 containers: []
	W0414 12:09:43.794608  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:43.794615  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:43.794678  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:43.829140  567375 cri.go:89] found id: ""
	I0414 12:09:43.829176  567375 logs.go:282] 0 containers: []
	W0414 12:09:43.829188  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:43.829203  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:43.829273  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:43.866503  567375 cri.go:89] found id: ""
	I0414 12:09:43.866540  567375 logs.go:282] 0 containers: []
	W0414 12:09:43.866552  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:43.866560  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:43.866627  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:43.900865  567375 cri.go:89] found id: ""
	I0414 12:09:43.900902  567375 logs.go:282] 0 containers: []
	W0414 12:09:43.900911  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:43.900917  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:43.900978  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:43.936159  567375 cri.go:89] found id: ""
	I0414 12:09:43.936193  567375 logs.go:282] 0 containers: []
	W0414 12:09:43.936204  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:43.936210  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:43.936269  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:43.968867  567375 cri.go:89] found id: ""
	I0414 12:09:43.968894  567375 logs.go:282] 0 containers: []
	W0414 12:09:43.968902  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:43.968908  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:43.968974  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:44.005033  567375 cri.go:89] found id: ""
	I0414 12:09:44.005064  567375 logs.go:282] 0 containers: []
	W0414 12:09:44.005076  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:44.005097  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:44.005124  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:44.059915  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:44.059956  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:44.073473  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:44.073502  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:44.138643  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:44.138672  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:44.138687  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:44.219523  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:44.219585  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:46.800036  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:46.812839  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:46.812937  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:46.852433  567375 cri.go:89] found id: ""
	I0414 12:09:46.852463  567375 logs.go:282] 0 containers: []
	W0414 12:09:46.852474  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:46.852482  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:46.852563  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:46.888570  567375 cri.go:89] found id: ""
	I0414 12:09:46.888604  567375 logs.go:282] 0 containers: []
	W0414 12:09:46.888616  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:46.888625  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:46.888702  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:46.926218  567375 cri.go:89] found id: ""
	I0414 12:09:46.926246  567375 logs.go:282] 0 containers: []
	W0414 12:09:46.926258  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:46.926266  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:46.926332  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:46.971410  567375 cri.go:89] found id: ""
	I0414 12:09:46.971448  567375 logs.go:282] 0 containers: []
	W0414 12:09:46.971477  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:46.971489  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:46.971569  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:47.007421  567375 cri.go:89] found id: ""
	I0414 12:09:47.007447  567375 logs.go:282] 0 containers: []
	W0414 12:09:47.007456  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:47.007462  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:47.007563  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:47.047575  567375 cri.go:89] found id: ""
	I0414 12:09:47.047613  567375 logs.go:282] 0 containers: []
	W0414 12:09:47.047626  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:47.047635  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:47.047706  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:47.081478  567375 cri.go:89] found id: ""
	I0414 12:09:47.081505  567375 logs.go:282] 0 containers: []
	W0414 12:09:47.081514  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:47.081521  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:47.081575  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:47.116920  567375 cri.go:89] found id: ""
	I0414 12:09:47.116951  567375 logs.go:282] 0 containers: []
	W0414 12:09:47.116961  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:47.116970  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:47.116982  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:47.168214  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:47.168256  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:47.184601  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:47.184649  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:47.253991  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:47.254014  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:47.254033  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:47.333728  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:47.333766  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:49.872496  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:49.887105  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:49.887195  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:49.922688  567375 cri.go:89] found id: ""
	I0414 12:09:49.922738  567375 logs.go:282] 0 containers: []
	W0414 12:09:49.922753  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:49.922762  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:49.922838  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:49.961447  567375 cri.go:89] found id: ""
	I0414 12:09:49.961479  567375 logs.go:282] 0 containers: []
	W0414 12:09:49.961492  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:49.961500  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:49.961565  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:50.006694  567375 cri.go:89] found id: ""
	I0414 12:09:50.006728  567375 logs.go:282] 0 containers: []
	W0414 12:09:50.006739  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:50.006747  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:50.006817  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:50.055384  567375 cri.go:89] found id: ""
	I0414 12:09:50.055413  567375 logs.go:282] 0 containers: []
	W0414 12:09:50.055426  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:50.055434  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:50.055500  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:50.100214  567375 cri.go:89] found id: ""
	I0414 12:09:50.100234  567375 logs.go:282] 0 containers: []
	W0414 12:09:50.100242  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:50.100249  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:50.100314  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:50.147423  567375 cri.go:89] found id: ""
	I0414 12:09:50.147446  567375 logs.go:282] 0 containers: []
	W0414 12:09:50.147457  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:50.147465  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:50.147513  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:50.190120  567375 cri.go:89] found id: ""
	I0414 12:09:50.190159  567375 logs.go:282] 0 containers: []
	W0414 12:09:50.190172  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:50.190180  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:50.190247  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:50.224798  567375 cri.go:89] found id: ""
	I0414 12:09:50.224832  567375 logs.go:282] 0 containers: []
	W0414 12:09:50.224844  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:50.224856  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:50.224871  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:50.290125  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:50.290165  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:50.304374  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:50.304406  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:50.384509  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:50.384540  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:50.384557  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:50.467075  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:50.467118  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:53.017711  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:53.031343  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:53.031424  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:53.072458  567375 cri.go:89] found id: ""
	I0414 12:09:53.072493  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.072507  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:53.072519  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:53.072584  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:53.108751  567375 cri.go:89] found id: ""
	I0414 12:09:53.108784  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.108795  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:53.108801  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:53.108876  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:53.144305  567375 cri.go:89] found id: ""
	I0414 12:09:53.144338  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.144350  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:53.144358  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:53.144427  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:53.183164  567375 cri.go:89] found id: ""
	I0414 12:09:53.183206  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.183216  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:53.183224  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:53.183314  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:53.217430  567375 cri.go:89] found id: ""
	I0414 12:09:53.217461  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.217473  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:53.217481  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:53.217547  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:53.250508  567375 cri.go:89] found id: ""
	I0414 12:09:53.250533  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.250541  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:53.250548  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:53.250599  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:53.285639  567375 cri.go:89] found id: ""
	I0414 12:09:53.285678  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.285697  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:53.285705  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:53.285776  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:53.322286  567375 cri.go:89] found id: ""
	I0414 12:09:53.322324  567375 logs.go:282] 0 containers: []
	W0414 12:09:53.322337  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:53.322350  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:53.322367  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:53.370870  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:53.370917  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:53.449155  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:53.449204  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:53.465153  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:53.465198  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:53.557288  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:53.557313  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:53.557325  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:56.138000  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:56.152583  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:56.152649  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:56.191068  567375 cri.go:89] found id: ""
	I0414 12:09:56.191105  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.191129  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:56.191139  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:56.191215  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:56.226692  567375 cri.go:89] found id: ""
	I0414 12:09:56.226718  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.226727  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:56.226733  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:56.226784  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:56.273655  567375 cri.go:89] found id: ""
	I0414 12:09:56.273683  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.273692  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:56.273698  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:56.273761  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:56.317803  567375 cri.go:89] found id: ""
	I0414 12:09:56.317838  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.317850  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:56.317860  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:56.317936  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:56.362443  567375 cri.go:89] found id: ""
	I0414 12:09:56.362539  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.362557  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:56.362566  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:56.362631  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:56.396180  567375 cri.go:89] found id: ""
	I0414 12:09:56.396213  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.396225  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:56.396234  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:56.396307  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:56.433921  567375 cri.go:89] found id: ""
	I0414 12:09:56.433955  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.433965  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:56.433972  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:56.434125  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:56.475681  567375 cri.go:89] found id: ""
	I0414 12:09:56.475711  567375 logs.go:282] 0 containers: []
	W0414 12:09:56.475723  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:56.475735  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:56.475750  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:56.489750  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:56.489788  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:56.580927  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:56.580965  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:56.580984  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:56.671163  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:56.671204  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:09:56.713828  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:56.713866  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:59.278953  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:09:59.292894  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:09:59.292959  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:09:59.328589  567375 cri.go:89] found id: ""
	I0414 12:09:59.328620  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.328628  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:09:59.328644  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:09:59.328706  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:09:59.365892  567375 cri.go:89] found id: ""
	I0414 12:09:59.365928  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.365939  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:09:59.365948  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:09:59.366004  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:09:59.411081  567375 cri.go:89] found id: ""
	I0414 12:09:59.411112  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.411123  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:09:59.411137  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:09:59.411213  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:09:59.446776  567375 cri.go:89] found id: ""
	I0414 12:09:59.446815  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.446824  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:09:59.446831  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:09:59.446884  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:09:59.483398  567375 cri.go:89] found id: ""
	I0414 12:09:59.483434  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.483446  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:09:59.483453  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:09:59.483522  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:09:59.521122  567375 cri.go:89] found id: ""
	I0414 12:09:59.521162  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.521175  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:09:59.521184  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:09:59.521254  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:09:59.554599  567375 cri.go:89] found id: ""
	I0414 12:09:59.554636  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.554650  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:09:59.554658  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:09:59.554724  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:09:59.589266  567375 cri.go:89] found id: ""
	I0414 12:09:59.589303  567375 logs.go:282] 0 containers: []
	W0414 12:09:59.589315  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:09:59.589328  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:09:59.589343  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:09:59.640676  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:09:59.640722  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:09:59.655085  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:09:59.655119  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:09:59.723834  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:09:59.723936  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:09:59.723965  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:09:59.821981  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:09:59.822024  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:02.366551  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:02.381064  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:02.381141  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:02.422370  567375 cri.go:89] found id: ""
	I0414 12:10:02.422407  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.422419  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:02.422428  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:02.422496  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:02.461706  567375 cri.go:89] found id: ""
	I0414 12:10:02.461740  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.461752  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:02.461760  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:02.461823  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:02.504100  567375 cri.go:89] found id: ""
	I0414 12:10:02.504137  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.504150  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:02.504158  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:02.504225  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:02.543314  567375 cri.go:89] found id: ""
	I0414 12:10:02.543348  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.543361  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:02.543369  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:02.543435  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:02.575461  567375 cri.go:89] found id: ""
	I0414 12:10:02.575495  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.575516  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:02.575524  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:02.575597  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:02.609300  567375 cri.go:89] found id: ""
	I0414 12:10:02.609335  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.609347  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:02.609354  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:02.609421  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:02.642019  567375 cri.go:89] found id: ""
	I0414 12:10:02.642055  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.642064  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:02.642076  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:02.642126  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:02.676666  567375 cri.go:89] found id: ""
	I0414 12:10:02.676694  567375 logs.go:282] 0 containers: []
	W0414 12:10:02.676702  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:02.676712  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:02.676726  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:02.690924  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:02.690956  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:02.772864  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:02.772909  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:02.772927  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:02.862597  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:02.862630  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:02.914169  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:02.914204  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:05.473330  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:05.486745  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:05.486845  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:05.519378  567375 cri.go:89] found id: ""
	I0414 12:10:05.519408  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.519419  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:05.519426  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:05.519490  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:05.562112  567375 cri.go:89] found id: ""
	I0414 12:10:05.562197  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.562216  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:05.562231  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:05.562304  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:05.597388  567375 cri.go:89] found id: ""
	I0414 12:10:05.597425  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.597437  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:05.597445  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:05.597510  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:05.631516  567375 cri.go:89] found id: ""
	I0414 12:10:05.631551  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.631562  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:05.631570  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:05.631639  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:05.669577  567375 cri.go:89] found id: ""
	I0414 12:10:05.669613  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.669624  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:05.669633  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:05.669700  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:05.706141  567375 cri.go:89] found id: ""
	I0414 12:10:05.706174  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.706186  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:05.706194  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:05.706246  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:05.743842  567375 cri.go:89] found id: ""
	I0414 12:10:05.743876  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.743888  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:05.743897  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:05.743972  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:05.782963  567375 cri.go:89] found id: ""
	I0414 12:10:05.782992  567375 logs.go:282] 0 containers: []
	W0414 12:10:05.783005  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:05.783016  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:05.783031  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:05.871224  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:05.871267  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:05.909592  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:05.909681  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:05.961520  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:05.961568  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:05.975322  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:05.975369  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:06.046989  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:08.547708  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:08.561234  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:08.561321  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:08.598453  567375 cri.go:89] found id: ""
	I0414 12:10:08.598486  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.598495  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:08.598502  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:08.598573  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:08.636447  567375 cri.go:89] found id: ""
	I0414 12:10:08.636477  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.636486  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:08.636492  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:08.636559  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:08.673977  567375 cri.go:89] found id: ""
	I0414 12:10:08.674010  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.674021  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:08.674031  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:08.674099  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:08.712933  567375 cri.go:89] found id: ""
	I0414 12:10:08.712969  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.712981  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:08.712989  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:08.713053  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:08.747228  567375 cri.go:89] found id: ""
	I0414 12:10:08.747259  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.747267  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:08.747274  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:08.747360  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:08.780932  567375 cri.go:89] found id: ""
	I0414 12:10:08.780964  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.780973  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:08.780979  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:08.781034  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:08.814276  567375 cri.go:89] found id: ""
	I0414 12:10:08.814308  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.814316  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:08.814323  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:08.814374  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:08.850569  567375 cri.go:89] found id: ""
	I0414 12:10:08.850593  567375 logs.go:282] 0 containers: []
	W0414 12:10:08.850600  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:08.850609  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:08.850619  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:08.907658  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:08.907698  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:08.922313  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:08.922344  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:08.995279  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:08.995327  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:08.995343  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:09.085433  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:09.085491  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:11.629654  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:11.642435  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:11.642513  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:11.680730  567375 cri.go:89] found id: ""
	I0414 12:10:11.680764  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.680776  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:11.680785  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:11.680859  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:11.719395  567375 cri.go:89] found id: ""
	I0414 12:10:11.719428  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.719438  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:11.719444  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:11.719507  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:11.757952  567375 cri.go:89] found id: ""
	I0414 12:10:11.757987  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.757997  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:11.758003  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:11.758063  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:11.790739  567375 cri.go:89] found id: ""
	I0414 12:10:11.790772  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.790783  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:11.790791  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:11.790855  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:11.824200  567375 cri.go:89] found id: ""
	I0414 12:10:11.824231  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.824239  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:11.824245  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:11.824304  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:11.860587  567375 cri.go:89] found id: ""
	I0414 12:10:11.860615  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.860635  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:11.860645  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:11.860730  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:11.896767  567375 cri.go:89] found id: ""
	I0414 12:10:11.896796  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.896806  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:11.896824  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:11.896898  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:11.936587  567375 cri.go:89] found id: ""
	I0414 12:10:11.936627  567375 logs.go:282] 0 containers: []
	W0414 12:10:11.936639  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:11.936650  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:11.936665  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:11.986854  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:11.986895  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:12.000269  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:12.000305  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:12.073415  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:12.073450  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:12.073466  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:12.161153  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:12.161193  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:14.705313  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:14.719105  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:14.719173  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:14.757746  567375 cri.go:89] found id: ""
	I0414 12:10:14.757782  567375 logs.go:282] 0 containers: []
	W0414 12:10:14.757792  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:14.757799  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:14.757874  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:14.791564  567375 cri.go:89] found id: ""
	I0414 12:10:14.791596  567375 logs.go:282] 0 containers: []
	W0414 12:10:14.791605  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:14.791612  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:14.791669  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:14.826458  567375 cri.go:89] found id: ""
	I0414 12:10:14.826486  567375 logs.go:282] 0 containers: []
	W0414 12:10:14.826495  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:14.826502  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:14.826561  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:14.860763  567375 cri.go:89] found id: ""
	I0414 12:10:14.860794  567375 logs.go:282] 0 containers: []
	W0414 12:10:14.860807  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:14.860816  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:14.860881  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:14.896162  567375 cri.go:89] found id: ""
	I0414 12:10:14.896198  567375 logs.go:282] 0 containers: []
	W0414 12:10:14.896210  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:14.896218  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:14.896288  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:14.930211  567375 cri.go:89] found id: ""
	I0414 12:10:14.930246  567375 logs.go:282] 0 containers: []
	W0414 12:10:14.930256  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:14.930262  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:14.930312  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:14.965717  567375 cri.go:89] found id: ""
	I0414 12:10:14.965749  567375 logs.go:282] 0 containers: []
	W0414 12:10:14.965761  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:14.965768  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:14.965829  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:15.000426  567375 cri.go:89] found id: ""
	I0414 12:10:15.000481  567375 logs.go:282] 0 containers: []
	W0414 12:10:15.000493  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:15.000505  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:15.000520  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:15.037736  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:15.037771  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:15.087578  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:15.087622  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:15.101567  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:15.101601  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:15.176166  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:15.176189  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:15.176203  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:17.760290  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:17.775229  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:17.775322  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:17.811064  567375 cri.go:89] found id: ""
	I0414 12:10:17.811088  567375 logs.go:282] 0 containers: []
	W0414 12:10:17.811119  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:17.811132  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:17.811256  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:17.851712  567375 cri.go:89] found id: ""
	I0414 12:10:17.851746  567375 logs.go:282] 0 containers: []
	W0414 12:10:17.851763  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:17.851771  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:17.851869  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:17.893301  567375 cri.go:89] found id: ""
	I0414 12:10:17.893337  567375 logs.go:282] 0 containers: []
	W0414 12:10:17.893351  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:17.893359  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:17.893427  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:17.927765  567375 cri.go:89] found id: ""
	I0414 12:10:17.927795  567375 logs.go:282] 0 containers: []
	W0414 12:10:17.927808  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:17.927817  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:17.927885  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:17.964962  567375 cri.go:89] found id: ""
	I0414 12:10:17.964994  567375 logs.go:282] 0 containers: []
	W0414 12:10:17.965005  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:17.965015  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:17.965088  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:17.998901  567375 cri.go:89] found id: ""
	I0414 12:10:17.998940  567375 logs.go:282] 0 containers: []
	W0414 12:10:17.998951  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:17.998960  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:17.999031  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:18.032433  567375 cri.go:89] found id: ""
	I0414 12:10:18.032470  567375 logs.go:282] 0 containers: []
	W0414 12:10:18.032483  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:18.032490  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:18.032555  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:18.069530  567375 cri.go:89] found id: ""
	I0414 12:10:18.069563  567375 logs.go:282] 0 containers: []
	W0414 12:10:18.069585  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:18.069599  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:18.069616  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:18.084431  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:18.084459  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:18.159382  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:18.159406  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:18.159423  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:18.267213  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:18.267259  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:18.311343  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:18.311385  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:20.863415  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:20.876805  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:20.876876  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:20.911250  567375 cri.go:89] found id: ""
	I0414 12:10:20.911280  567375 logs.go:282] 0 containers: []
	W0414 12:10:20.911319  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:20.911328  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:20.911396  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:20.945564  567375 cri.go:89] found id: ""
	I0414 12:10:20.945594  567375 logs.go:282] 0 containers: []
	W0414 12:10:20.945606  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:20.945613  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:20.945678  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:20.979788  567375 cri.go:89] found id: ""
	I0414 12:10:20.979812  567375 logs.go:282] 0 containers: []
	W0414 12:10:20.979823  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:20.979831  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:20.979884  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:21.015402  567375 cri.go:89] found id: ""
	I0414 12:10:21.015429  567375 logs.go:282] 0 containers: []
	W0414 12:10:21.015438  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:21.015451  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:21.015504  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:21.050322  567375 cri.go:89] found id: ""
	I0414 12:10:21.050348  567375 logs.go:282] 0 containers: []
	W0414 12:10:21.050364  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:21.050370  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:21.050436  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:21.082100  567375 cri.go:89] found id: ""
	I0414 12:10:21.082131  567375 logs.go:282] 0 containers: []
	W0414 12:10:21.082140  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:21.082146  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:21.082215  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:21.120344  567375 cri.go:89] found id: ""
	I0414 12:10:21.120380  567375 logs.go:282] 0 containers: []
	W0414 12:10:21.120392  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:21.120401  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:21.120470  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:21.156856  567375 cri.go:89] found id: ""
	I0414 12:10:21.156887  567375 logs.go:282] 0 containers: []
	W0414 12:10:21.156896  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:21.156907  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:21.156918  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:21.200158  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:21.200192  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:21.264565  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:21.264601  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:21.277701  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:21.277729  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:21.346631  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:21.346656  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:21.346670  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:23.932487  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:23.948988  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:23.949072  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:23.988564  567375 cri.go:89] found id: ""
	I0414 12:10:23.988593  567375 logs.go:282] 0 containers: []
	W0414 12:10:23.988604  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:23.988611  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:23.988696  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:24.029807  567375 cri.go:89] found id: ""
	I0414 12:10:24.029843  567375 logs.go:282] 0 containers: []
	W0414 12:10:24.029859  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:24.029867  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:24.029924  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:24.073169  567375 cri.go:89] found id: ""
	I0414 12:10:24.073191  567375 logs.go:282] 0 containers: []
	W0414 12:10:24.073198  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:24.073205  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:24.073245  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:24.112058  567375 cri.go:89] found id: ""
	I0414 12:10:24.112084  567375 logs.go:282] 0 containers: []
	W0414 12:10:24.112092  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:24.112099  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:24.112145  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:24.149069  567375 cri.go:89] found id: ""
	I0414 12:10:24.149097  567375 logs.go:282] 0 containers: []
	W0414 12:10:24.149109  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:24.149123  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:24.149187  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:24.194765  567375 cri.go:89] found id: ""
	I0414 12:10:24.194798  567375 logs.go:282] 0 containers: []
	W0414 12:10:24.194810  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:24.194819  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:24.194882  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:24.238439  567375 cri.go:89] found id: ""
	I0414 12:10:24.238470  567375 logs.go:282] 0 containers: []
	W0414 12:10:24.238481  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:24.238488  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:24.238552  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:24.275512  567375 cri.go:89] found id: ""
	I0414 12:10:24.275548  567375 logs.go:282] 0 containers: []
	W0414 12:10:24.275560  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:24.275573  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:24.275588  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:24.342662  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:24.342702  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:24.413316  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:24.413353  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:24.428899  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:24.428929  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:24.507012  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:24.507036  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:24.507058  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:27.099983  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:27.112969  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:27.113032  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:27.144765  567375 cri.go:89] found id: ""
	I0414 12:10:27.144802  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.144813  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:27.144822  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:27.144898  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:27.184359  567375 cri.go:89] found id: ""
	I0414 12:10:27.184396  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.184408  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:27.184416  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:27.184494  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:27.223718  567375 cri.go:89] found id: ""
	I0414 12:10:27.223756  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.223768  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:27.223776  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:27.223901  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:27.256446  567375 cri.go:89] found id: ""
	I0414 12:10:27.256475  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.256487  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:27.256496  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:27.256563  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:27.288408  567375 cri.go:89] found id: ""
	I0414 12:10:27.288442  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.288454  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:27.288462  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:27.288518  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:27.320200  567375 cri.go:89] found id: ""
	I0414 12:10:27.320237  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.320250  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:27.320259  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:27.320326  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:27.353545  567375 cri.go:89] found id: ""
	I0414 12:10:27.353574  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.353586  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:27.353597  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:27.353659  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:27.386313  567375 cri.go:89] found id: ""
	I0414 12:10:27.386352  567375 logs.go:282] 0 containers: []
	W0414 12:10:27.386365  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:27.386378  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:27.386397  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:27.437530  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:27.437577  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:27.451982  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:27.452021  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:27.522635  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:27.522664  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:27.522680  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:27.599865  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:27.599906  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:30.147584  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:30.159848  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:30.159942  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:30.195613  567375 cri.go:89] found id: ""
	I0414 12:10:30.195646  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.195657  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:30.195665  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:30.195735  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:30.228262  567375 cri.go:89] found id: ""
	I0414 12:10:30.228299  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.228311  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:30.228319  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:30.228388  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:30.269181  567375 cri.go:89] found id: ""
	I0414 12:10:30.269215  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.269225  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:30.269237  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:30.269307  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:30.308824  567375 cri.go:89] found id: ""
	I0414 12:10:30.308859  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.308871  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:30.308880  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:30.308951  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:30.343354  567375 cri.go:89] found id: ""
	I0414 12:10:30.343391  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.343404  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:30.343412  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:30.343479  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:30.375811  567375 cri.go:89] found id: ""
	I0414 12:10:30.375853  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.375866  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:30.375874  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:30.375939  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:30.411133  567375 cri.go:89] found id: ""
	I0414 12:10:30.411168  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.411177  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:30.411184  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:30.411249  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:30.446952  567375 cri.go:89] found id: ""
	I0414 12:10:30.446994  567375 logs.go:282] 0 containers: []
	W0414 12:10:30.447013  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:30.447027  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:30.447044  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:30.521831  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:30.521880  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:30.521899  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:30.602410  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:30.602457  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:30.648129  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:30.648173  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:30.723723  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:30.723792  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:33.251434  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:33.264870  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:33.265006  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:33.302998  567375 cri.go:89] found id: ""
	I0414 12:10:33.303024  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.303032  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:33.303038  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:33.303094  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:33.336581  567375 cri.go:89] found id: ""
	I0414 12:10:33.336609  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.336616  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:33.336622  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:33.336678  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:33.368858  567375 cri.go:89] found id: ""
	I0414 12:10:33.368890  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.368901  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:33.368909  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:33.368970  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:33.405270  567375 cri.go:89] found id: ""
	I0414 12:10:33.405303  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.405312  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:33.405318  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:33.405370  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:33.439734  567375 cri.go:89] found id: ""
	I0414 12:10:33.439763  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.439775  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:33.439783  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:33.439861  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:33.471982  567375 cri.go:89] found id: ""
	I0414 12:10:33.472016  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.472028  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:33.472036  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:33.472106  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:33.502256  567375 cri.go:89] found id: ""
	I0414 12:10:33.502287  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.502299  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:33.502310  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:33.502378  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:33.540821  567375 cri.go:89] found id: ""
	I0414 12:10:33.540853  567375 logs.go:282] 0 containers: []
	W0414 12:10:33.540866  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:33.540878  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:33.540896  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:33.556513  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:33.556548  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:33.622225  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:33.622254  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:33.622271  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:33.697962  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:33.698004  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:33.738278  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:33.738308  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:36.288162  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:36.300707  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:36.300775  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:36.332193  567375 cri.go:89] found id: ""
	I0414 12:10:36.332231  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.332243  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:36.332251  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:36.332310  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:36.364513  567375 cri.go:89] found id: ""
	I0414 12:10:36.364547  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.364555  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:36.364561  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:36.364624  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:36.396869  567375 cri.go:89] found id: ""
	I0414 12:10:36.396898  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.396907  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:36.396913  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:36.396980  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:36.428520  567375 cri.go:89] found id: ""
	I0414 12:10:36.428552  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.428566  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:36.428573  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:36.428629  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:36.461496  567375 cri.go:89] found id: ""
	I0414 12:10:36.461530  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.461540  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:36.461548  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:36.461613  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:36.495545  567375 cri.go:89] found id: ""
	I0414 12:10:36.495586  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.495599  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:36.495607  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:36.495685  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:36.535350  567375 cri.go:89] found id: ""
	I0414 12:10:36.535380  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.535388  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:36.535394  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:36.535465  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:36.569729  567375 cri.go:89] found id: ""
	I0414 12:10:36.569763  567375 logs.go:282] 0 containers: []
	W0414 12:10:36.569775  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:36.569788  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:36.569802  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:36.645935  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:36.645976  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:36.682103  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:36.682140  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:36.735017  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:36.735063  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:36.748285  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:36.748322  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:36.811764  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:39.313479  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:39.326445  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:39.326517  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:39.356319  567375 cri.go:89] found id: ""
	I0414 12:10:39.356353  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.356366  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:39.356374  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:39.356426  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:39.387824  567375 cri.go:89] found id: ""
	I0414 12:10:39.387854  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.387863  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:39.387870  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:39.387921  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:39.419490  567375 cri.go:89] found id: ""
	I0414 12:10:39.419523  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.419535  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:39.419542  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:39.419610  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:39.450499  567375 cri.go:89] found id: ""
	I0414 12:10:39.450532  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.450544  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:39.450552  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:39.450603  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:39.481571  567375 cri.go:89] found id: ""
	I0414 12:10:39.481607  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.481615  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:39.481622  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:39.481685  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:39.516984  567375 cri.go:89] found id: ""
	I0414 12:10:39.517019  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.517030  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:39.517039  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:39.517101  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:39.551721  567375 cri.go:89] found id: ""
	I0414 12:10:39.551750  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.551760  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:39.551770  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:39.551827  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:39.584987  567375 cri.go:89] found id: ""
	I0414 12:10:39.585014  567375 logs.go:282] 0 containers: []
	W0414 12:10:39.585026  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:39.585035  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:39.585047  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:39.654928  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:39.654953  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:39.654966  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:39.734371  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:39.734411  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:39.771413  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:39.771447  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:39.821858  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:39.821904  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:42.336357  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:42.351795  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:42.351883  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:42.393772  567375 cri.go:89] found id: ""
	I0414 12:10:42.393800  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.393812  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:42.393820  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:42.393879  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:42.445568  567375 cri.go:89] found id: ""
	I0414 12:10:42.445597  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.445605  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:42.445612  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:42.445671  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:42.489823  567375 cri.go:89] found id: ""
	I0414 12:10:42.489852  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.489861  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:42.489868  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:42.489924  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:42.529960  567375 cri.go:89] found id: ""
	I0414 12:10:42.529995  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.530006  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:42.530014  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:42.530084  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:42.562025  567375 cri.go:89] found id: ""
	I0414 12:10:42.562066  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.562077  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:42.562087  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:42.562155  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:42.593306  567375 cri.go:89] found id: ""
	I0414 12:10:42.593339  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.593348  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:42.593354  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:42.593412  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:42.625223  567375 cri.go:89] found id: ""
	I0414 12:10:42.625250  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.625261  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:42.625269  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:42.625328  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:42.657124  567375 cri.go:89] found id: ""
	I0414 12:10:42.657150  567375 logs.go:282] 0 containers: []
	W0414 12:10:42.657159  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:42.657169  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:42.657181  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:42.739813  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:42.739859  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:42.781272  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:42.781301  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:42.829417  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:42.829465  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:42.842243  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:42.842271  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:42.906656  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:45.407359  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:45.422441  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:45.422504  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:45.459910  567375 cri.go:89] found id: ""
	I0414 12:10:45.459946  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.459957  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:45.459965  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:45.460029  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:45.495160  567375 cri.go:89] found id: ""
	I0414 12:10:45.495195  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.495209  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:45.495218  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:45.495283  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:45.528164  567375 cri.go:89] found id: ""
	I0414 12:10:45.528195  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.528204  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:45.528210  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:45.528278  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:45.569550  567375 cri.go:89] found id: ""
	I0414 12:10:45.569579  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.569590  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:45.569599  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:45.569664  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:45.604000  567375 cri.go:89] found id: ""
	I0414 12:10:45.604029  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.604038  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:45.604046  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:45.604113  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:45.634987  567375 cri.go:89] found id: ""
	I0414 12:10:45.635022  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.635035  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:45.635044  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:45.635108  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:45.674055  567375 cri.go:89] found id: ""
	I0414 12:10:45.674092  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.674107  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:45.674117  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:45.674186  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:45.717559  567375 cri.go:89] found id: ""
	I0414 12:10:45.717599  567375 logs.go:282] 0 containers: []
	W0414 12:10:45.717612  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:45.717624  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:45.717639  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:45.792495  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:45.792519  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:45.792535  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:45.884394  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:45.884433  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:45.930834  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:45.930869  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:46.000522  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:46.000571  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:48.515452  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:48.532862  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:48.532928  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:48.576190  567375 cri.go:89] found id: ""
	I0414 12:10:48.576216  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.576224  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:48.576230  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:48.576284  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:48.617369  567375 cri.go:89] found id: ""
	I0414 12:10:48.617402  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.617415  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:48.617423  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:48.617483  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:48.651032  567375 cri.go:89] found id: ""
	I0414 12:10:48.651069  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.651083  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:48.651093  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:48.651158  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:48.691458  567375 cri.go:89] found id: ""
	I0414 12:10:48.691490  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.691500  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:48.691514  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:48.691588  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:48.733585  567375 cri.go:89] found id: ""
	I0414 12:10:48.733615  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.733627  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:48.733635  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:48.733694  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:48.770729  567375 cri.go:89] found id: ""
	I0414 12:10:48.770773  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.770821  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:48.770833  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:48.770909  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:48.813150  567375 cri.go:89] found id: ""
	I0414 12:10:48.813183  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.813196  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:48.813204  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:48.813278  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:48.872880  567375 cri.go:89] found id: ""
	I0414 12:10:48.872915  567375 logs.go:282] 0 containers: []
	W0414 12:10:48.872927  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:48.872940  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:48.872962  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:48.944687  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:48.944741  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:48.960525  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:48.960566  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:49.031728  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:49.031756  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:49.031774  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:49.145020  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:49.145065  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:51.681681  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:51.698008  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:51.698080  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:51.734500  567375 cri.go:89] found id: ""
	I0414 12:10:51.734538  567375 logs.go:282] 0 containers: []
	W0414 12:10:51.734553  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:51.734563  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:51.734652  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:51.776670  567375 cri.go:89] found id: ""
	I0414 12:10:51.776699  567375 logs.go:282] 0 containers: []
	W0414 12:10:51.776710  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:51.776732  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:51.776798  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:51.811029  567375 cri.go:89] found id: ""
	I0414 12:10:51.811058  567375 logs.go:282] 0 containers: []
	W0414 12:10:51.811070  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:51.811077  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:51.811152  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:51.856303  567375 cri.go:89] found id: ""
	I0414 12:10:51.856338  567375 logs.go:282] 0 containers: []
	W0414 12:10:51.856350  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:51.856358  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:51.856421  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:51.899851  567375 cri.go:89] found id: ""
	I0414 12:10:51.899887  567375 logs.go:282] 0 containers: []
	W0414 12:10:51.899899  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:51.899908  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:51.899980  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:51.938272  567375 cri.go:89] found id: ""
	I0414 12:10:51.938301  567375 logs.go:282] 0 containers: []
	W0414 12:10:51.938313  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:51.938322  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:51.938389  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:51.979168  567375 cri.go:89] found id: ""
	I0414 12:10:51.979193  567375 logs.go:282] 0 containers: []
	W0414 12:10:51.979204  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:51.979212  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:51.979267  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:52.023923  567375 cri.go:89] found id: ""
	I0414 12:10:52.023958  567375 logs.go:282] 0 containers: []
	W0414 12:10:52.023969  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:52.023980  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:52.023996  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:52.065780  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:52.065819  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:52.126354  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:52.126388  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:52.140324  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:52.140356  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:52.222092  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:52.222131  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:52.222146  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:54.805408  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:54.823627  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:54.823712  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:54.860509  567375 cri.go:89] found id: ""
	I0414 12:10:54.860541  567375 logs.go:282] 0 containers: []
	W0414 12:10:54.860549  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:54.860555  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:54.860623  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:54.893721  567375 cri.go:89] found id: ""
	I0414 12:10:54.893749  567375 logs.go:282] 0 containers: []
	W0414 12:10:54.893758  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:54.893764  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:54.893832  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:54.925623  567375 cri.go:89] found id: ""
	I0414 12:10:54.925655  567375 logs.go:282] 0 containers: []
	W0414 12:10:54.925665  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:54.925674  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:54.925740  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:54.960450  567375 cri.go:89] found id: ""
	I0414 12:10:54.960491  567375 logs.go:282] 0 containers: []
	W0414 12:10:54.960504  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:54.960513  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:54.960579  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:54.999048  567375 cri.go:89] found id: ""
	I0414 12:10:54.999084  567375 logs.go:282] 0 containers: []
	W0414 12:10:54.999096  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:54.999104  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:54.999180  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:55.037308  567375 cri.go:89] found id: ""
	I0414 12:10:55.037350  567375 logs.go:282] 0 containers: []
	W0414 12:10:55.037363  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:55.037372  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:55.037453  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:55.079685  567375 cri.go:89] found id: ""
	I0414 12:10:55.079721  567375 logs.go:282] 0 containers: []
	W0414 12:10:55.079734  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:55.079742  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:55.079816  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:55.125156  567375 cri.go:89] found id: ""
	I0414 12:10:55.125186  567375 logs.go:282] 0 containers: []
	W0414 12:10:55.125202  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:55.125214  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:55.125230  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:10:55.192496  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:55.192541  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:55.208320  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:55.208348  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:55.276405  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:55.276430  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:55.276446  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:55.366393  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:55.366450  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:57.920361  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:10:57.934873  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:10:57.934957  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:10:57.978269  567375 cri.go:89] found id: ""
	I0414 12:10:57.978298  567375 logs.go:282] 0 containers: []
	W0414 12:10:57.978310  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:10:57.978318  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:10:57.978383  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:10:58.012892  567375 cri.go:89] found id: ""
	I0414 12:10:58.012924  567375 logs.go:282] 0 containers: []
	W0414 12:10:58.012935  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:10:58.012942  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:10:58.013015  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:10:58.061485  567375 cri.go:89] found id: ""
	I0414 12:10:58.061520  567375 logs.go:282] 0 containers: []
	W0414 12:10:58.061533  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:10:58.061541  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:10:58.061605  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:10:58.099026  567375 cri.go:89] found id: ""
	I0414 12:10:58.099056  567375 logs.go:282] 0 containers: []
	W0414 12:10:58.099065  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:10:58.099071  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:10:58.099138  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:10:58.139959  567375 cri.go:89] found id: ""
	I0414 12:10:58.139997  567375 logs.go:282] 0 containers: []
	W0414 12:10:58.140009  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:10:58.140019  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:10:58.140089  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:10:58.178569  567375 cri.go:89] found id: ""
	I0414 12:10:58.178599  567375 logs.go:282] 0 containers: []
	W0414 12:10:58.178611  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:10:58.178620  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:10:58.178681  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:10:58.216319  567375 cri.go:89] found id: ""
	I0414 12:10:58.216349  567375 logs.go:282] 0 containers: []
	W0414 12:10:58.216358  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:10:58.216367  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:10:58.216418  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:10:58.249228  567375 cri.go:89] found id: ""
	I0414 12:10:58.249260  567375 logs.go:282] 0 containers: []
	W0414 12:10:58.249269  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:10:58.249279  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:10:58.249291  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:10:58.262917  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:10:58.262956  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:10:58.333225  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:10:58.333251  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:10:58.333268  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:10:58.424870  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:10:58.424922  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:10:58.468826  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:10:58.468873  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:01.037442  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:01.056605  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:01.056697  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:01.104347  567375 cri.go:89] found id: ""
	I0414 12:11:01.104378  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.104391  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:01.104399  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:01.104463  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:01.151728  567375 cri.go:89] found id: ""
	I0414 12:11:01.151763  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.151774  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:01.151782  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:01.151852  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:01.198883  567375 cri.go:89] found id: ""
	I0414 12:11:01.198926  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.198939  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:01.198947  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:01.199024  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:01.244148  567375 cri.go:89] found id: ""
	I0414 12:11:01.244242  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.244270  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:01.244287  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:01.244359  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:01.281935  567375 cri.go:89] found id: ""
	I0414 12:11:01.281971  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.281985  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:01.281994  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:01.282077  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:01.324678  567375 cri.go:89] found id: ""
	I0414 12:11:01.324711  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.324722  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:01.324730  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:01.324802  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:01.370255  567375 cri.go:89] found id: ""
	I0414 12:11:01.370351  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.370365  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:01.370373  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:01.370453  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:01.417906  567375 cri.go:89] found id: ""
	I0414 12:11:01.417942  567375 logs.go:282] 0 containers: []
	W0414 12:11:01.417955  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:01.417967  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:01.417983  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:01.485054  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:01.485110  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:01.502665  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:01.502696  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:01.600485  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:01.600524  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:01.600547  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:01.718120  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:01.718183  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:04.271459  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:04.284894  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:04.284964  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:04.318356  567375 cri.go:89] found id: ""
	I0414 12:11:04.318389  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.318398  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:04.318405  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:04.318475  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:04.352869  567375 cri.go:89] found id: ""
	I0414 12:11:04.352899  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.352908  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:04.352914  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:04.352979  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:04.389040  567375 cri.go:89] found id: ""
	I0414 12:11:04.389069  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.389082  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:04.389090  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:04.389151  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:04.423524  567375 cri.go:89] found id: ""
	I0414 12:11:04.423559  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.423572  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:04.423582  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:04.423648  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:04.458424  567375 cri.go:89] found id: ""
	I0414 12:11:04.458453  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.458461  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:04.458468  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:04.458534  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:04.493558  567375 cri.go:89] found id: ""
	I0414 12:11:04.493590  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.493603  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:04.493612  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:04.493678  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:04.527709  567375 cri.go:89] found id: ""
	I0414 12:11:04.527740  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.527748  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:04.527755  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:04.527809  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:04.565057  567375 cri.go:89] found id: ""
	I0414 12:11:04.565087  567375 logs.go:282] 0 containers: []
	W0414 12:11:04.565097  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:04.565110  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:04.565126  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:04.617127  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:04.617169  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:04.631044  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:04.631090  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:04.700304  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:04.700331  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:04.700348  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:04.777501  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:04.777544  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:07.315760  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:07.333471  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:07.333554  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:07.380216  567375 cri.go:89] found id: ""
	I0414 12:11:07.380243  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.380251  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:07.380258  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:07.380311  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:07.426876  567375 cri.go:89] found id: ""
	I0414 12:11:07.426914  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.426926  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:07.426935  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:07.427000  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:07.471855  567375 cri.go:89] found id: ""
	I0414 12:11:07.471891  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.471904  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:07.471913  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:07.471990  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:07.516623  567375 cri.go:89] found id: ""
	I0414 12:11:07.516653  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.516664  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:07.516672  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:07.516734  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:07.569573  567375 cri.go:89] found id: ""
	I0414 12:11:07.569611  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.569623  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:07.569632  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:07.569712  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:07.613266  567375 cri.go:89] found id: ""
	I0414 12:11:07.613298  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.613309  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:07.613318  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:07.613392  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:07.668386  567375 cri.go:89] found id: ""
	I0414 12:11:07.668415  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.668425  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:07.668433  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:07.668491  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:07.725183  567375 cri.go:89] found id: ""
	I0414 12:11:07.725216  567375 logs.go:282] 0 containers: []
	W0414 12:11:07.725229  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:07.725240  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:07.725256  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:07.788246  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:07.788296  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:07.805826  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:07.805881  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:07.903734  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:07.903758  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:07.903773  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:07.999467  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:07.999571  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:10.550627  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:10.564550  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:10.564635  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:10.602395  567375 cri.go:89] found id: ""
	I0414 12:11:10.602424  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.602433  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:10.602439  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:10.602513  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:10.635985  567375 cri.go:89] found id: ""
	I0414 12:11:10.636018  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.636040  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:10.636048  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:10.636116  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:10.676295  567375 cri.go:89] found id: ""
	I0414 12:11:10.676333  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.676342  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:10.676348  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:10.676402  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:10.711528  567375 cri.go:89] found id: ""
	I0414 12:11:10.711553  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.711561  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:10.711566  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:10.711627  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:10.754838  567375 cri.go:89] found id: ""
	I0414 12:11:10.754872  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.754886  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:10.754894  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:10.754966  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:10.792899  567375 cri.go:89] found id: ""
	I0414 12:11:10.792931  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.792939  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:10.792947  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:10.793015  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:10.827877  567375 cri.go:89] found id: ""
	I0414 12:11:10.827919  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.827932  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:10.827940  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:10.827996  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:10.860810  567375 cri.go:89] found id: ""
	I0414 12:11:10.860844  567375 logs.go:282] 0 containers: []
	W0414 12:11:10.860856  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:10.860870  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:10.860887  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:10.874143  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:10.874178  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:10.942773  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:10.942802  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:10.942821  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:11.022600  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:11.022648  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:11.067512  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:11.067542  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:13.631280  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:13.644495  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:13.644578  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:13.677319  567375 cri.go:89] found id: ""
	I0414 12:11:13.677356  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.677365  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:13.677372  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:13.677427  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:13.708548  567375 cri.go:89] found id: ""
	I0414 12:11:13.708575  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.708583  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:13.708589  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:13.708652  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:13.744612  567375 cri.go:89] found id: ""
	I0414 12:11:13.744644  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.744656  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:13.744665  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:13.744721  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:13.777606  567375 cri.go:89] found id: ""
	I0414 12:11:13.777640  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.777651  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:13.777659  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:13.777714  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:13.810909  567375 cri.go:89] found id: ""
	I0414 12:11:13.810942  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.810953  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:13.810961  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:13.811012  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:13.844285  567375 cri.go:89] found id: ""
	I0414 12:11:13.844312  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.844320  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:13.844326  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:13.844387  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:13.877010  567375 cri.go:89] found id: ""
	I0414 12:11:13.877042  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.877050  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:13.877057  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:13.877113  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:13.909501  567375 cri.go:89] found id: ""
	I0414 12:11:13.909538  567375 logs.go:282] 0 containers: []
	W0414 12:11:13.909551  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:13.909564  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:13.909581  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:13.991376  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:13.991416  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:14.029575  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:14.029617  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:14.079997  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:14.080054  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:14.096935  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:14.096974  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:14.174854  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:16.675543  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:16.690201  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:16.690274  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:16.722119  567375 cri.go:89] found id: ""
	I0414 12:11:16.722154  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.722161  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:16.722168  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:16.722217  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:16.755937  567375 cri.go:89] found id: ""
	I0414 12:11:16.755964  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.755972  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:16.755978  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:16.756027  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:16.789504  567375 cri.go:89] found id: ""
	I0414 12:11:16.789531  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.789539  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:16.789545  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:16.789596  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:16.840424  567375 cri.go:89] found id: ""
	I0414 12:11:16.840452  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.840460  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:16.840466  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:16.840524  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:16.872212  567375 cri.go:89] found id: ""
	I0414 12:11:16.872237  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.872245  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:16.872250  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:16.872303  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:16.905565  567375 cri.go:89] found id: ""
	I0414 12:11:16.905594  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.905603  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:16.905609  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:16.905668  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:16.938529  567375 cri.go:89] found id: ""
	I0414 12:11:16.938557  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.938566  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:16.938571  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:16.938623  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:16.971813  567375 cri.go:89] found id: ""
	I0414 12:11:16.971842  567375 logs.go:282] 0 containers: []
	W0414 12:11:16.971850  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:16.971860  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:16.971872  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:17.021290  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:17.021330  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:17.035239  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:17.035274  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:17.108250  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:17.108276  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:17.108291  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:17.184930  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:17.184967  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:19.725511  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:19.738838  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:19.738930  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:19.780291  567375 cri.go:89] found id: ""
	I0414 12:11:19.780327  567375 logs.go:282] 0 containers: []
	W0414 12:11:19.780335  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:19.780341  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:19.780398  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:19.813229  567375 cri.go:89] found id: ""
	I0414 12:11:19.813264  567375 logs.go:282] 0 containers: []
	W0414 12:11:19.813276  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:19.813284  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:19.813358  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:19.845061  567375 cri.go:89] found id: ""
	I0414 12:11:19.845103  567375 logs.go:282] 0 containers: []
	W0414 12:11:19.845112  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:19.845127  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:19.845195  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:19.877842  567375 cri.go:89] found id: ""
	I0414 12:11:19.877870  567375 logs.go:282] 0 containers: []
	W0414 12:11:19.877878  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:19.877885  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:19.877945  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:19.913510  567375 cri.go:89] found id: ""
	I0414 12:11:19.913540  567375 logs.go:282] 0 containers: []
	W0414 12:11:19.913548  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:19.913555  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:19.913607  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:19.947466  567375 cri.go:89] found id: ""
	I0414 12:11:19.947499  567375 logs.go:282] 0 containers: []
	W0414 12:11:19.947509  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:19.947516  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:19.947571  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:19.991305  567375 cri.go:89] found id: ""
	I0414 12:11:19.991336  567375 logs.go:282] 0 containers: []
	W0414 12:11:19.991344  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:19.991350  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:19.991395  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:20.026456  567375 cri.go:89] found id: ""
	I0414 12:11:20.026485  567375 logs.go:282] 0 containers: []
	W0414 12:11:20.026494  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:20.026503  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:20.026518  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:20.041043  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:20.041078  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:20.118653  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:20.118685  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:20.118701  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:20.198916  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:20.198958  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:20.238329  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:20.238362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:22.793258  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:22.807500  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:22.807583  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:22.844169  567375 cri.go:89] found id: ""
	I0414 12:11:22.844198  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.844210  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:22.844218  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:22.844283  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:22.883943  567375 cri.go:89] found id: ""
	I0414 12:11:22.883974  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.883986  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:22.883994  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:22.884063  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:22.918904  567375 cri.go:89] found id: ""
	I0414 12:11:22.918938  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.918950  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:22.918958  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:22.919015  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:22.959839  567375 cri.go:89] found id: ""
	I0414 12:11:22.959879  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.959892  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:22.959900  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:22.959966  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:23.002272  567375 cri.go:89] found id: ""
	I0414 12:11:23.002301  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.002313  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:23.002324  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:23.002392  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:23.037206  567375 cri.go:89] found id: ""
	I0414 12:11:23.037242  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.037254  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:23.037262  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:23.037339  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:23.073871  567375 cri.go:89] found id: ""
	I0414 12:11:23.073898  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.073907  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:23.073912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:23.073974  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:23.118533  567375 cri.go:89] found id: ""
	I0414 12:11:23.118571  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.118584  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:23.118597  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:23.118615  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:23.133894  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:23.133938  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:23.226964  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:23.226992  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:23.227010  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:23.352810  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:23.352855  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:23.402260  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:23.402297  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:25.957521  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:25.970937  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:25.971011  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:26.004566  567375 cri.go:89] found id: ""
	I0414 12:11:26.004601  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.004612  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:26.004620  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:26.004683  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:26.044984  567375 cri.go:89] found id: ""
	I0414 12:11:26.045016  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.045029  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:26.045037  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:26.045102  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:26.077283  567375 cri.go:89] found id: ""
	I0414 12:11:26.077316  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.077328  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:26.077336  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:26.077403  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:26.115453  567375 cri.go:89] found id: ""
	I0414 12:11:26.115478  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.115486  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:26.115493  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:26.115547  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:26.154963  567375 cri.go:89] found id: ""
	I0414 12:11:26.155002  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.155013  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:26.155021  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:26.155115  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:26.192115  567375 cri.go:89] found id: ""
	I0414 12:11:26.192148  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.192160  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:26.192169  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:26.192230  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:26.233202  567375 cri.go:89] found id: ""
	I0414 12:11:26.233236  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.233248  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:26.233256  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:26.233320  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:26.267547  567375 cri.go:89] found id: ""
	I0414 12:11:26.267579  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.267591  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:26.267602  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:26.267618  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:26.331976  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:26.332017  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:26.345893  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:26.345942  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:26.424476  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:26.424502  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:26.424518  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:26.513728  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:26.513763  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:29.057175  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:29.073805  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:29.073912  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:29.105549  567375 cri.go:89] found id: ""
	I0414 12:11:29.105578  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.105586  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:29.105594  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:29.105663  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:29.137613  567375 cri.go:89] found id: ""
	I0414 12:11:29.137643  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.137652  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:29.137658  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:29.137712  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:29.169687  567375 cri.go:89] found id: ""
	I0414 12:11:29.169726  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.169739  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:29.169752  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:29.169837  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:29.202019  567375 cri.go:89] found id: ""
	I0414 12:11:29.202054  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.202068  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:29.202077  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:29.202153  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:29.233953  567375 cri.go:89] found id: ""
	I0414 12:11:29.233991  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.234004  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:29.234014  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:29.234083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:29.267465  567375 cri.go:89] found id: ""
	I0414 12:11:29.267498  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.267511  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:29.267518  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:29.267585  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:29.301872  567375 cri.go:89] found id: ""
	I0414 12:11:29.301897  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.301905  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:29.301912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:29.301965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:29.336739  567375 cri.go:89] found id: ""
	I0414 12:11:29.336778  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.336792  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:29.336804  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:29.336821  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:29.386826  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:29.386867  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:29.402381  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:29.402411  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:29.471119  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:29.471146  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:29.471162  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:29.549103  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:29.549147  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:32.093046  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:32.111567  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:32.111656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:32.147814  567375 cri.go:89] found id: ""
	I0414 12:11:32.147845  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.147856  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:32.147865  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:32.147932  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:32.184293  567375 cri.go:89] found id: ""
	I0414 12:11:32.184327  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.184337  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:32.184345  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:32.184415  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:32.220242  567375 cri.go:89] found id: ""
	I0414 12:11:32.220283  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.220294  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:32.220302  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:32.220368  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:32.259235  567375 cri.go:89] found id: ""
	I0414 12:11:32.259274  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.259302  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:32.259320  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:32.259395  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:32.296349  567375 cri.go:89] found id: ""
	I0414 12:11:32.296383  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.296396  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:32.296404  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:32.296477  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:32.337046  567375 cri.go:89] found id: ""
	I0414 12:11:32.337078  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.337097  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:32.337106  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:32.337181  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:32.370809  567375 cri.go:89] found id: ""
	I0414 12:11:32.370841  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.370855  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:32.370864  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:32.370923  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:32.409908  567375 cri.go:89] found id: ""
	I0414 12:11:32.409936  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.409945  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:32.409955  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:32.409967  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:32.463974  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:32.464019  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:32.478989  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:32.479020  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:32.547623  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:32.547647  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:32.547659  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:32.635676  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:32.635716  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:35.172933  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:35.185360  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:35.185430  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:35.215587  567375 cri.go:89] found id: ""
	I0414 12:11:35.215619  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.215630  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:35.215639  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:35.215703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:35.246725  567375 cri.go:89] found id: ""
	I0414 12:11:35.246756  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.246769  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:35.246777  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:35.246842  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:35.277582  567375 cri.go:89] found id: ""
	I0414 12:11:35.277615  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.277627  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:35.277634  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:35.277703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:35.308852  567375 cri.go:89] found id: ""
	I0414 12:11:35.308884  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.308896  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:35.308904  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:35.308976  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:35.344753  567375 cri.go:89] found id: ""
	I0414 12:11:35.344785  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.344805  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:35.344813  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:35.344889  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:35.375334  567375 cri.go:89] found id: ""
	I0414 12:11:35.375369  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.375382  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:35.375392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:35.375461  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:35.407962  567375 cri.go:89] found id: ""
	I0414 12:11:35.407995  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.408003  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:35.408009  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:35.408072  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:35.438923  567375 cri.go:89] found id: ""
	I0414 12:11:35.438951  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.438959  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:35.438969  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:35.438982  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:35.451619  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:35.451655  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:35.515840  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:35.515872  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:35.515890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:35.591791  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:35.591838  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:35.629963  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:35.629994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.177510  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:38.189629  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:38.189703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:38.221893  567375 cri.go:89] found id: ""
	I0414 12:11:38.221930  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.221943  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:38.221952  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:38.222022  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:38.253207  567375 cri.go:89] found id: ""
	I0414 12:11:38.253238  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.253246  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:38.253254  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:38.253314  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:38.284207  567375 cri.go:89] found id: ""
	I0414 12:11:38.284237  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.284250  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:38.284259  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:38.284317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:38.316011  567375 cri.go:89] found id: ""
	I0414 12:11:38.316042  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.316055  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:38.316062  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:38.316129  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:38.346662  567375 cri.go:89] found id: ""
	I0414 12:11:38.346694  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.346706  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:38.346715  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:38.346775  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:38.378428  567375 cri.go:89] found id: ""
	I0414 12:11:38.378460  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.378468  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:38.378474  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:38.378527  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:38.409730  567375 cri.go:89] found id: ""
	I0414 12:11:38.409781  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.409793  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:38.409803  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:38.409880  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:38.441413  567375 cri.go:89] found id: ""
	I0414 12:11:38.441439  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.441448  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:38.441458  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:38.441471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.488672  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:38.488723  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:38.501037  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:38.501066  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:38.563620  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:38.563643  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:38.563660  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:38.637874  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:38.637912  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:41.174407  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:41.188283  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:41.188349  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:41.218963  567375 cri.go:89] found id: ""
	I0414 12:11:41.218995  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.219007  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:41.219015  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:41.219080  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:41.254974  567375 cri.go:89] found id: ""
	I0414 12:11:41.255007  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.255016  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:41.255022  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:41.255083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:41.291440  567375 cri.go:89] found id: ""
	I0414 12:11:41.291478  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.291490  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:41.291498  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:41.291566  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:41.326668  567375 cri.go:89] found id: ""
	I0414 12:11:41.326699  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.326710  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:41.326718  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:41.326788  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:41.358533  567375 cri.go:89] found id: ""
	I0414 12:11:41.358564  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.358577  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:41.358585  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:41.358656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:41.390847  567375 cri.go:89] found id: ""
	I0414 12:11:41.390892  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.390904  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:41.390916  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:41.390986  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:41.422995  567375 cri.go:89] found id: ""
	I0414 12:11:41.423029  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.423040  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:41.423047  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:41.423108  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:41.455329  567375 cri.go:89] found id: ""
	I0414 12:11:41.455359  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.455371  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:41.455384  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:41.455398  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:41.506257  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:41.506288  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:41.518836  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:41.518866  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:41.588714  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:41.588744  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:41.588764  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:41.672001  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:41.672039  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:44.216461  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:44.229313  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:44.229404  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:44.263625  567375 cri.go:89] found id: ""
	I0414 12:11:44.263662  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.263674  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:44.263682  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:44.263746  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:44.295775  567375 cri.go:89] found id: ""
	I0414 12:11:44.295815  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.295829  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:44.295836  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:44.295905  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:44.340233  567375 cri.go:89] found id: ""
	I0414 12:11:44.340270  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.340281  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:44.340289  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:44.340358  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:44.379008  567375 cri.go:89] found id: ""
	I0414 12:11:44.379046  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.379060  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:44.379070  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:44.379148  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:44.412114  567375 cri.go:89] found id: ""
	I0414 12:11:44.412151  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.412160  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:44.412166  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:44.412217  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:44.446940  567375 cri.go:89] found id: ""
	I0414 12:11:44.446967  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.446975  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:44.446982  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:44.447037  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:44.494452  567375 cri.go:89] found id: ""
	I0414 12:11:44.494491  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.494503  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:44.494511  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:44.494578  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:44.531111  567375 cri.go:89] found id: ""
	I0414 12:11:44.531158  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.531171  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:44.531185  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:44.531201  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:44.590909  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:44.590954  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:44.607376  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:44.607428  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:44.678145  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:44.678171  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:44.678190  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:44.758306  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:44.758351  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:47.316487  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:47.331760  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:47.331855  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:47.366754  567375 cri.go:89] found id: ""
	I0414 12:11:47.366790  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.366800  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:47.366807  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:47.366876  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:47.401386  567375 cri.go:89] found id: ""
	I0414 12:11:47.401418  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.401430  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:47.401438  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:47.401500  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:47.436630  567375 cri.go:89] found id: ""
	I0414 12:11:47.436672  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.436686  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:47.436695  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:47.436770  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:47.476106  567375 cri.go:89] found id: ""
	I0414 12:11:47.476140  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.476149  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:47.476156  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:47.476224  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:47.511092  567375 cri.go:89] found id: ""
	I0414 12:11:47.511117  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.511126  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:47.511134  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:47.511196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:47.543336  567375 cri.go:89] found id: ""
	I0414 12:11:47.543365  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.543375  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:47.543392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:47.543455  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:47.591258  567375 cri.go:89] found id: ""
	I0414 12:11:47.591282  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.591307  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:47.591315  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:47.591378  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:47.631828  567375 cri.go:89] found id: ""
	I0414 12:11:47.631858  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.631867  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:47.631888  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:47.631901  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:47.681449  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:47.681491  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:47.695772  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:47.695808  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:47.767246  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:47.767279  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:47.767312  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:47.849554  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:47.849608  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:50.386577  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:50.399173  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:50.399257  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:50.429909  567375 cri.go:89] found id: ""
	I0414 12:11:50.429938  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.429948  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:50.429956  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:50.430016  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:50.460948  567375 cri.go:89] found id: ""
	I0414 12:11:50.460981  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.460990  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:50.460996  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:50.461056  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:50.492141  567375 cri.go:89] found id: ""
	I0414 12:11:50.492172  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.492179  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:50.492186  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:50.492249  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:50.524274  567375 cri.go:89] found id: ""
	I0414 12:11:50.524301  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.524309  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:50.524317  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:50.524391  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:50.556554  567375 cri.go:89] found id: ""
	I0414 12:11:50.556583  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.556594  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:50.556601  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:50.556671  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:50.598848  567375 cri.go:89] found id: ""
	I0414 12:11:50.598878  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.598889  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:50.598898  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:50.598965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:50.629450  567375 cri.go:89] found id: ""
	I0414 12:11:50.629482  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.629491  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:50.629497  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:50.629550  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:50.660726  567375 cri.go:89] found id: ""
	I0414 12:11:50.660764  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.660778  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:50.660790  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:50.660809  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:50.711830  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:50.711868  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:50.724837  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:50.724869  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:50.787307  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:50.787340  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:50.787356  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:50.861702  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:50.861749  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:53.398783  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:53.412227  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:53.412304  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:53.451115  567375 cri.go:89] found id: ""
	I0414 12:11:53.451149  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.451161  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:53.451170  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:53.451236  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:53.489749  567375 cri.go:89] found id: ""
	I0414 12:11:53.489783  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.489793  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:53.489801  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:53.489847  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:53.542102  567375 cri.go:89] found id: ""
	I0414 12:11:53.542122  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.542132  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:53.542140  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:53.542196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:53.582780  567375 cri.go:89] found id: ""
	I0414 12:11:53.582814  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.582827  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:53.582837  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:53.582900  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:53.616309  567375 cri.go:89] found id: ""
	I0414 12:11:53.616339  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.616355  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:53.616368  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:53.616429  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:53.650528  567375 cri.go:89] found id: ""
	I0414 12:11:53.650564  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.650578  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:53.650586  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:53.650658  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:53.687484  567375 cri.go:89] found id: ""
	I0414 12:11:53.687514  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.687525  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:53.687532  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:53.687593  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:53.729803  567375 cri.go:89] found id: ""
	I0414 12:11:53.729836  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.729848  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:53.729866  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:53.729883  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:53.787229  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:53.787281  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:53.803320  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:53.803362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:53.879853  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:53.879875  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:53.879890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:53.967553  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:53.967596  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:56.509793  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:56.527348  567375 kubeadm.go:597] duration metric: took 4m3.66529435s to restartPrimaryControlPlane
	W0414 12:11:56.527439  567375 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 12:11:56.527471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:11:57.129851  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:11:57.148604  567375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:11:57.161658  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:11:57.174834  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:11:57.174855  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:11:57.174903  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:11:57.187575  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:11:57.187656  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:11:57.200722  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:11:57.212875  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:11:57.212938  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:11:57.224425  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.234090  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:11:57.234150  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.244756  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:11:57.254119  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:11:57.254179  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
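	# Illustrative sketch, not captured output: the per-file check-and-remove steps above
	# amount to the loop below. The paths and the grep target are taken from this log;
	# the loop form itself is only for readability.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done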
	I0414 12:11:57.263664  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:11:57.335377  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:11:57.335465  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:11:57.480832  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:11:57.481011  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:11:57.481159  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:11:57.665866  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:11:57.667749  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:11:57.667857  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:11:57.667951  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:11:57.668066  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:11:57.668147  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:11:57.668265  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:11:57.668349  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:11:57.668440  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:11:57.668605  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:11:57.669216  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:11:57.669669  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:11:57.669739  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:11:57.669815  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:11:57.786691  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:11:58.140236  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:11:58.329890  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:11:58.422986  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:11:58.436920  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:11:58.438164  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:11:58.438254  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:11:58.590525  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:11:58.592980  567375 out.go:235]   - Booting up control plane ...
	I0414 12:11:58.593129  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:11:58.603522  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:11:58.603646  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:11:58.604814  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:11:58.609402  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:12:38.610672  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:12:38.611482  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:38.611732  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:43.612152  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:43.612389  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:53.612812  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:53.613076  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:13.613917  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:13.614151  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616094  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:53.616337  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616381  567375 kubeadm.go:310] 
	I0414 12:13:53.616467  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:13:53.616525  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:13:53.616535  567375 kubeadm.go:310] 
	I0414 12:13:53.616587  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:13:53.616626  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:13:53.616782  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:13:53.616803  567375 kubeadm.go:310] 
	I0414 12:13:53.616927  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:13:53.616975  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:13:53.617019  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:13:53.617040  567375 kubeadm.go:310] 
	I0414 12:13:53.617133  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:13:53.617207  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:13:53.617220  567375 kubeadm.go:310] 
	I0414 12:13:53.617379  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:13:53.617479  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:13:53.617552  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:13:53.617615  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:13:53.617621  567375 kubeadm.go:310] 
	I0414 12:13:53.618369  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:13:53.618463  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:13:53.618564  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 12:13:53.618776  567375 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 12:13:53.618845  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:13:54.079747  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:13:54.094028  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:13:54.103509  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:13:54.103536  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:13:54.103601  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:13:54.112305  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:13:54.112379  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:13:54.121095  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:13:54.129511  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:13:54.129569  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:13:54.138481  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.147165  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:13:54.147236  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.157633  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:13:54.167514  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:13:54.167580  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:13:54.177012  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:13:54.380519  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:15:50.310615  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:15:50.310709  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 12:15:50.312555  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:15:50.312621  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:15:50.312752  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:15:50.312914  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:15:50.313060  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:15:50.313152  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:15:50.316148  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:15:50.316217  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:15:50.316295  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:15:50.316380  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:15:50.316450  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:15:50.316548  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:15:50.316653  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:15:50.316746  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:15:50.316835  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:15:50.316942  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:15:50.317005  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:15:50.317040  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:15:50.317086  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:15:50.317133  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:15:50.317180  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:15:50.317230  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:15:50.317288  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:15:50.317415  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:15:50.317492  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:15:50.317525  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:15:50.317593  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:15:50.319132  567375 out.go:235]   - Booting up control plane ...
	I0414 12:15:50.319215  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:15:50.319298  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:15:50.319374  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:15:50.319478  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:15:50.319619  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:15:50.319660  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:15:50.319744  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.319956  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320056  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320241  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320326  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320504  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320593  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320780  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320883  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.321042  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.321060  567375 kubeadm.go:310] 
	I0414 12:15:50.321125  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:15:50.321180  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:15:50.321189  567375 kubeadm.go:310] 
	I0414 12:15:50.321243  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:15:50.321291  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:15:50.321409  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:15:50.321418  567375 kubeadm.go:310] 
	I0414 12:15:50.321529  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:15:50.321561  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:15:50.321589  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:15:50.321601  567375 kubeadm.go:310] 
	I0414 12:15:50.321700  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:15:50.321774  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:15:50.321780  567375 kubeadm.go:310] 
	I0414 12:15:50.321876  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:15:50.321967  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:15:50.322037  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:15:50.322099  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:15:50.322146  567375 kubeadm.go:310] 
	I0414 12:15:50.322192  567375 kubeadm.go:394] duration metric: took 7m57.509642242s to StartCluster
	I0414 12:15:50.322260  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:15:50.322317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:15:50.365321  567375 cri.go:89] found id: ""
	I0414 12:15:50.365360  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.365372  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:15:50.365388  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:15:50.365462  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:15:50.399917  567375 cri.go:89] found id: ""
	I0414 12:15:50.399956  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.399969  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:15:50.399977  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:15:50.400039  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:15:50.433841  567375 cri.go:89] found id: ""
	I0414 12:15:50.433889  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.433900  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:15:50.433906  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:15:50.433962  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:15:50.472959  567375 cri.go:89] found id: ""
	I0414 12:15:50.472993  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.473001  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:15:50.473008  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:15:50.473069  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:15:50.506397  567375 cri.go:89] found id: ""
	I0414 12:15:50.506434  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.506446  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:15:50.506454  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:15:50.506521  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:15:50.540645  567375 cri.go:89] found id: ""
	I0414 12:15:50.540672  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.540681  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:15:50.540687  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:15:50.540765  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:15:50.574232  567375 cri.go:89] found id: ""
	I0414 12:15:50.574263  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.574272  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:15:50.574278  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:15:50.574333  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:15:50.607014  567375 cri.go:89] found id: ""
	I0414 12:15:50.607044  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.607051  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:15:50.607063  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:15:50.607075  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:15:50.660430  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:15:50.660471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:15:50.676411  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:15:50.676454  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:15:50.782951  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:15:50.782981  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:15:50.782994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:15:50.886201  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:15:50.886250  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 12:15:50.923193  567375 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 12:15:50.923259  567375 out.go:270] * 
	* 
	W0414 12:15:50.923378  567375 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.923400  567375 out.go:270] * 
	* 
	W0414 12:15:50.924263  567375 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 12:15:50.927535  567375 out.go:201] 
	W0414 12:15:50.928729  567375 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.928768  567375 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 12:15:50.928787  567375 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 12:15:50.930136  567375 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-071646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
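The suggestion emitted in the log above points at the kubelet cgroup driver. A minimal manual triage sketch, assuming the same profile name and kvm2/crio environment as the failing command (the --extra-config flag and the journalctl command come from the suggestion in the log; the reduced flag set here is illustrative, not the test's exact invocation):

	# re-run the failing start with the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p old-k8s-version-071646 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# if the control plane still does not come up, inspect the kubelet unit inside the VM
	out/minikube-linux-amd64 ssh -p old-k8s-version-071646 -- sudo journalctl -xeu kubelet | tail -n 100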
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (245.630888ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-071646 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-751466 image list                          | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| delete  | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| start   | -p newest-cni-104469 --memory=2200 --alsologtostderr   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-500740 image list                           | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| delete  | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| addons  | enable metrics-server -p newest-cni-104469             | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-104469                  | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-104469 --memory=2200 --alsologtostderr   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-477612                           | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	| image   | newest-cni-104469 image list                           | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:12 UTC | 14 Apr 25 12:12 UTC |
	| delete  | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:12 UTC | 14 Apr 25 12:12 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:11:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:11:21.120181  569647 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:11:21.120306  569647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:11:21.120314  569647 out.go:358] Setting ErrFile to fd 2...
	I0414 12:11:21.120321  569647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:11:21.120558  569647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 12:11:21.121141  569647 out.go:352] Setting JSON to false
	I0414 12:11:21.122099  569647 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21232,"bootTime":1744611449,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:11:21.122165  569647 start.go:139] virtualization: kvm guest
	I0414 12:11:21.125125  569647 out.go:177] * [newest-cni-104469] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:11:21.126818  569647 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 12:11:21.126843  569647 notify.go:220] Checking for updates...
	I0414 12:11:21.129634  569647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:11:21.130894  569647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:21.132126  569647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:11:21.133333  569647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:11:21.134633  569647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:11:21.136670  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:21.137109  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.137207  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.153425  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0414 12:11:21.153887  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.154408  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.154435  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.154848  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.155038  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.155280  569647 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:11:21.155578  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.155618  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.171468  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0414 12:11:21.172092  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.172627  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.172657  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.173069  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.173264  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.212393  569647 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 12:11:21.213612  569647 start.go:297] selected driver: kvm2
	I0414 12:11:21.213629  569647 start.go:901] validating driver "kvm2" against &{Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:21.213754  569647 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:11:21.214497  569647 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:11:21.214593  569647 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:11:21.230852  569647 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:11:21.231270  569647 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0414 12:11:21.231336  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:21.231396  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:21.231436  569647 start.go:340] cluster config:
	{Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:21.231575  569647 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:11:21.233331  569647 out.go:177] * Starting "newest-cni-104469" primary control-plane node in "newest-cni-104469" cluster
	I0414 12:11:21.234770  569647 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:11:21.234813  569647 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:11:21.234825  569647 cache.go:56] Caching tarball of preloaded images
	I0414 12:11:21.234902  569647 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:11:21.234912  569647 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:11:21.235013  569647 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/config.json ...
	I0414 12:11:21.235220  569647 start.go:360] acquireMachinesLock for newest-cni-104469: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:11:21.235263  569647 start.go:364] duration metric: took 25.31µs to acquireMachinesLock for "newest-cni-104469"
	I0414 12:11:21.235277  569647 start.go:96] Skipping create...Using existing machine configuration
	I0414 12:11:21.235284  569647 fix.go:54] fixHost starting: 
	I0414 12:11:21.235603  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.235648  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.250885  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0414 12:11:21.251441  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.251920  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.251949  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.252312  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.252478  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.252628  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:21.254356  569647 fix.go:112] recreateIfNeeded on newest-cni-104469: state=Stopped err=<nil>
	I0414 12:11:21.254385  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	W0414 12:11:21.254563  569647 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 12:11:21.256588  569647 out.go:177] * Restarting existing kvm2 VM for "newest-cni-104469" ...
	I0414 12:11:20.198916  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:20.198958  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:20.238329  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:20.238362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:22.793258  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:22.807500  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:22.807583  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:22.844169  567375 cri.go:89] found id: ""
	I0414 12:11:22.844198  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.844210  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:22.844218  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:22.844283  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:22.883943  567375 cri.go:89] found id: ""
	I0414 12:11:22.883974  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.883986  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:22.883994  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:22.884063  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:22.918904  567375 cri.go:89] found id: ""
	I0414 12:11:22.918938  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.918950  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:22.918958  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:22.919015  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:22.959839  567375 cri.go:89] found id: ""
	I0414 12:11:22.959879  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.959892  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:22.959900  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:22.959966  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:23.002272  567375 cri.go:89] found id: ""
	I0414 12:11:23.002301  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.002313  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:23.002324  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:23.002392  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:23.037206  567375 cri.go:89] found id: ""
	I0414 12:11:23.037242  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.037254  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:23.037262  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:23.037339  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:23.073871  567375 cri.go:89] found id: ""
	I0414 12:11:23.073898  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.073907  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:23.073912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:23.073974  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:23.118533  567375 cri.go:89] found id: ""
	I0414 12:11:23.118571  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.118584  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:23.118597  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:23.118615  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:23.133894  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:23.133938  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:23.226964  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:23.226992  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:23.227010  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:23.352810  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:23.352855  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:23.402260  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:23.402297  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:21.257925  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Start
	I0414 12:11:21.258153  569647 main.go:141] libmachine: (newest-cni-104469) starting domain...
	I0414 12:11:21.258183  569647 main.go:141] libmachine: (newest-cni-104469) ensuring networks are active...
	I0414 12:11:21.259127  569647 main.go:141] libmachine: (newest-cni-104469) Ensuring network default is active
	I0414 12:11:21.259517  569647 main.go:141] libmachine: (newest-cni-104469) Ensuring network mk-newest-cni-104469 is active
	I0414 12:11:21.260074  569647 main.go:141] libmachine: (newest-cni-104469) getting domain XML...
	I0414 12:11:21.260776  569647 main.go:141] libmachine: (newest-cni-104469) creating domain...
	I0414 12:11:22.524766  569647 main.go:141] libmachine: (newest-cni-104469) waiting for IP...
	I0414 12:11:22.525521  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:22.526003  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:22.526073  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:22.526003  569682 retry.go:31] will retry after 307.883967ms: waiting for domain to come up
	I0414 12:11:22.835858  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:22.836463  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:22.836493  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:22.836420  569682 retry.go:31] will retry after 334.279409ms: waiting for domain to come up
	I0414 12:11:23.172155  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:23.172695  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:23.172727  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:23.172660  569682 retry.go:31] will retry after 299.810788ms: waiting for domain to come up
	I0414 12:11:23.474019  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:23.474427  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:23.474451  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:23.474416  569682 retry.go:31] will retry after 607.883043ms: waiting for domain to come up
	I0414 12:11:24.084316  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:24.084843  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:24.084887  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:24.084803  569682 retry.go:31] will retry after 665.362972ms: waiting for domain to come up
	I0414 12:11:24.751457  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:24.752025  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:24.752048  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:24.752008  569682 retry.go:31] will retry after 745.34954ms: waiting for domain to come up
	I0414 12:11:25.499392  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:25.544776  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:25.544821  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:25.544720  569682 retry.go:31] will retry after 908.451126ms: waiting for domain to come up
	I0414 12:11:25.957521  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:25.970937  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:25.971011  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:26.004566  567375 cri.go:89] found id: ""
	I0414 12:11:26.004601  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.004612  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:26.004620  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:26.004683  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:26.044984  567375 cri.go:89] found id: ""
	I0414 12:11:26.045016  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.045029  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:26.045037  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:26.045102  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:26.077283  567375 cri.go:89] found id: ""
	I0414 12:11:26.077316  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.077328  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:26.077336  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:26.077403  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:26.115453  567375 cri.go:89] found id: ""
	I0414 12:11:26.115478  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.115486  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:26.115493  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:26.115547  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:26.154963  567375 cri.go:89] found id: ""
	I0414 12:11:26.155002  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.155013  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:26.155021  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:26.155115  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:26.192115  567375 cri.go:89] found id: ""
	I0414 12:11:26.192148  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.192160  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:26.192169  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:26.192230  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:26.233202  567375 cri.go:89] found id: ""
	I0414 12:11:26.233236  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.233248  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:26.233256  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:26.233320  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:26.267547  567375 cri.go:89] found id: ""
	I0414 12:11:26.267579  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.267591  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:26.267602  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:26.267618  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:26.331976  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:26.332017  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:26.345893  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:26.345942  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:26.424476  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:26.424502  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:26.424518  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:26.513728  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:26.513763  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:29.057175  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:29.073805  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:29.073912  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:29.105549  567375 cri.go:89] found id: ""
	I0414 12:11:29.105578  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.105586  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:29.105594  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:29.105663  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:29.137613  567375 cri.go:89] found id: ""
	I0414 12:11:29.137643  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.137652  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:29.137658  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:29.137712  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:29.169687  567375 cri.go:89] found id: ""
	I0414 12:11:29.169726  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.169739  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:29.169752  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:29.169837  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:29.202019  567375 cri.go:89] found id: ""
	I0414 12:11:29.202054  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.202068  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:29.202077  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:29.202153  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:29.233953  567375 cri.go:89] found id: ""
	I0414 12:11:29.233991  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.234004  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:29.234014  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:29.234083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:29.267465  567375 cri.go:89] found id: ""
	I0414 12:11:29.267498  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.267511  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:29.267518  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:29.267585  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:29.301872  567375 cri.go:89] found id: ""
	I0414 12:11:29.301897  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.301905  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:29.301912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:29.301965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:29.336739  567375 cri.go:89] found id: ""
	I0414 12:11:29.336778  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.336792  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:29.336804  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:29.336821  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:29.386826  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:29.386867  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:29.402381  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:29.402411  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:29.471119  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:29.471146  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:29.471162  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:29.549103  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:29.549147  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:26.454591  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:26.455304  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:26.455337  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:26.455253  569682 retry.go:31] will retry after 971.962699ms: waiting for domain to come up
	I0414 12:11:27.428593  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:27.429086  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:27.429145  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:27.429061  569682 retry.go:31] will retry after 1.858464483s: waiting for domain to come up
	I0414 12:11:29.290212  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:29.290765  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:29.290794  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:29.290721  569682 retry.go:31] will retry after 1.729999321s: waiting for domain to come up
	I0414 12:11:31.022585  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:31.023131  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:31.023154  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:31.023104  569682 retry.go:31] will retry after 1.833182014s: waiting for domain to come up
	I0414 12:11:32.093046  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:32.111567  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:32.111656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:32.147814  567375 cri.go:89] found id: ""
	I0414 12:11:32.147845  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.147856  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:32.147865  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:32.147932  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:32.184293  567375 cri.go:89] found id: ""
	I0414 12:11:32.184327  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.184337  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:32.184345  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:32.184415  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:32.220242  567375 cri.go:89] found id: ""
	I0414 12:11:32.220283  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.220294  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:32.220302  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:32.220368  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:32.259235  567375 cri.go:89] found id: ""
	I0414 12:11:32.259274  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.259302  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:32.259320  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:32.259395  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:32.296349  567375 cri.go:89] found id: ""
	I0414 12:11:32.296383  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.296396  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:32.296404  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:32.296477  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:32.337046  567375 cri.go:89] found id: ""
	I0414 12:11:32.337078  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.337097  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:32.337106  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:32.337181  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:32.370809  567375 cri.go:89] found id: ""
	I0414 12:11:32.370841  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.370855  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:32.370864  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:32.370923  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:32.409908  567375 cri.go:89] found id: ""
	I0414 12:11:32.409936  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.409945  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:32.409955  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:32.409967  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:32.463974  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:32.464019  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:32.478989  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:32.479020  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:32.547623  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:32.547647  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:32.547659  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:32.635676  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:32.635716  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:32.858397  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:32.858993  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:32.859046  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:32.858997  569682 retry.go:31] will retry after 2.287767065s: waiting for domain to come up
	I0414 12:11:35.148507  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:35.149113  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:35.149168  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:35.149076  569682 retry.go:31] will retry after 3.709674414s: waiting for domain to come up
	I0414 12:11:35.172933  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:35.185360  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:35.185430  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:35.215587  567375 cri.go:89] found id: ""
	I0414 12:11:35.215619  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.215630  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:35.215639  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:35.215703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:35.246725  567375 cri.go:89] found id: ""
	I0414 12:11:35.246756  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.246769  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:35.246777  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:35.246842  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:35.277582  567375 cri.go:89] found id: ""
	I0414 12:11:35.277615  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.277627  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:35.277634  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:35.277703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:35.308852  567375 cri.go:89] found id: ""
	I0414 12:11:35.308884  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.308896  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:35.308904  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:35.308976  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:35.344753  567375 cri.go:89] found id: ""
	I0414 12:11:35.344785  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.344805  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:35.344813  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:35.344889  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:35.375334  567375 cri.go:89] found id: ""
	I0414 12:11:35.375369  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.375382  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:35.375392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:35.375461  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:35.407962  567375 cri.go:89] found id: ""
	I0414 12:11:35.407995  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.408003  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:35.408009  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:35.408072  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:35.438923  567375 cri.go:89] found id: ""
	I0414 12:11:35.438951  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.438959  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:35.438969  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:35.438982  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:35.451619  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:35.451655  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:35.515840  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:35.515872  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:35.515890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:35.591791  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:35.591838  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:35.629963  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:35.629994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.177510  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:38.189629  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:38.189703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:38.221893  567375 cri.go:89] found id: ""
	I0414 12:11:38.221930  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.221943  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:38.221952  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:38.222022  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:38.253207  567375 cri.go:89] found id: ""
	I0414 12:11:38.253238  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.253246  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:38.253254  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:38.253314  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:38.284207  567375 cri.go:89] found id: ""
	I0414 12:11:38.284237  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.284250  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:38.284259  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:38.284317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:38.316011  567375 cri.go:89] found id: ""
	I0414 12:11:38.316042  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.316055  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:38.316062  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:38.316129  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:38.346662  567375 cri.go:89] found id: ""
	I0414 12:11:38.346694  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.346706  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:38.346715  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:38.346775  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:38.378428  567375 cri.go:89] found id: ""
	I0414 12:11:38.378460  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.378468  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:38.378474  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:38.378527  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:38.409730  567375 cri.go:89] found id: ""
	I0414 12:11:38.409781  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.409793  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:38.409803  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:38.409880  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:38.441413  567375 cri.go:89] found id: ""
	I0414 12:11:38.441439  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.441448  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:38.441458  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:38.441471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.488672  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:38.488723  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:38.501037  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:38.501066  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:38.563620  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:38.563643  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:38.563660  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:38.637874  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:38.637912  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:38.861814  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.862326  569647 main.go:141] libmachine: (newest-cni-104469) found domain IP: 192.168.61.116
	I0414 12:11:38.862344  569647 main.go:141] libmachine: (newest-cni-104469) reserving static IP address...
	I0414 12:11:38.862354  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has current primary IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.862810  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "newest-cni-104469", mac: "52:54:00:db:0b:38", ip: "192.168.61.116"} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.862843  569647 main.go:141] libmachine: (newest-cni-104469) DBG | skip adding static IP to network mk-newest-cni-104469 - found existing host DHCP lease matching {name: "newest-cni-104469", mac: "52:54:00:db:0b:38", ip: "192.168.61.116"}
	I0414 12:11:38.862859  569647 main.go:141] libmachine: (newest-cni-104469) reserved static IP address 192.168.61.116 for domain newest-cni-104469
	I0414 12:11:38.862870  569647 main.go:141] libmachine: (newest-cni-104469) waiting for SSH...
	I0414 12:11:38.862881  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Getting to WaitForSSH function...
	I0414 12:11:38.865098  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.865437  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.865470  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.865529  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Using SSH client type: external
	I0414 12:11:38.865560  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa (-rw-------)
	I0414 12:11:38.865587  569647 main.go:141] libmachine: (newest-cni-104469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 12:11:38.865634  569647 main.go:141] libmachine: (newest-cni-104469) DBG | About to run SSH command:
	I0414 12:11:38.865654  569647 main.go:141] libmachine: (newest-cni-104469) DBG | exit 0
	I0414 12:11:38.991362  569647 main.go:141] libmachine: (newest-cni-104469) DBG | SSH cmd err, output: <nil>: 
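	The WaitForSSH probe above can be reproduced by hand with the flags the driver logged; a minimal sketch, using the key path and guest IP from the lines above:
	# hedged reconstruction of the probe, not part of the test run
	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa \
	    -p 22 docker@192.168.61.116 'exit 0' && echo "SSH is up"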
	I0414 12:11:38.991738  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetConfigRaw
	I0414 12:11:38.992363  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:38.995348  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.995739  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.995763  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.996122  569647 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/config.json ...
	I0414 12:11:38.996361  569647 machine.go:93] provisionDockerMachine start ...
	I0414 12:11:38.996390  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:38.996627  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:38.998988  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.999418  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.999442  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.999619  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:38.999790  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:38.999942  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.000165  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.000352  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.000637  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.000650  569647 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 12:11:39.111566  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 12:11:39.111601  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.111888  569647 buildroot.go:166] provisioning hostname "newest-cni-104469"
	I0414 12:11:39.111921  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.112099  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.114831  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.115201  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.115231  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.115348  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.115518  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.115681  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.115834  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.116016  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.116227  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.116243  569647 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-104469 && echo "newest-cni-104469" | sudo tee /etc/hostname
	I0414 12:11:39.235982  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-104469
	
	I0414 12:11:39.236023  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.238767  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.239126  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.239154  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.239375  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.239553  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.239730  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.239848  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.239994  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.240236  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.240253  569647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-104469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-104469/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-104469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 12:11:39.359797  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:11:39.359831  569647 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 12:11:39.359856  569647 buildroot.go:174] setting up certificates
	I0414 12:11:39.359871  569647 provision.go:84] configureAuth start
	I0414 12:11:39.359887  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.360227  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:39.363241  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.363632  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.363661  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.363808  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.366517  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.366878  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.366923  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.367093  569647 provision.go:143] copyHostCerts
	I0414 12:11:39.367154  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 12:11:39.367178  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 12:11:39.367259  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 12:11:39.367409  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 12:11:39.367422  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 12:11:39.367461  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 12:11:39.367564  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 12:11:39.367576  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 12:11:39.367609  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 12:11:39.367696  569647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.newest-cni-104469 san=[127.0.0.1 192.168.61.116 localhost minikube newest-cni-104469]
	I0414 12:11:39.512453  569647 provision.go:177] copyRemoteCerts
	I0414 12:11:39.512534  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 12:11:39.512575  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.515537  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.515909  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.515945  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.516071  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.516276  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.516443  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.516573  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:39.601480  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 12:11:39.625398  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 12:11:39.650802  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
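	A hedged sanity check of the certs copied above (not part of the run): the server cert should chain to the copied CA and carry the SANs generated in the provision step.
	# assumes the three files scp'd above are in place under /etc/docker
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'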
	I0414 12:11:39.675068  569647 provision.go:87] duration metric: took 315.17938ms to configureAuth
	I0414 12:11:39.675101  569647 buildroot.go:189] setting minikube options for container-runtime
	I0414 12:11:39.675349  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:39.675432  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.678249  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.678617  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.678663  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.678831  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.679031  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.679193  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.679332  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.679487  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.679696  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.679712  569647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 12:11:39.899965  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 12:11:39.899998  569647 machine.go:96] duration metric: took 903.619003ms to provisionDockerMachine
	I0414 12:11:39.900014  569647 start.go:293] postStartSetup for "newest-cni-104469" (driver="kvm2")
	I0414 12:11:39.900028  569647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 12:11:39.900053  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:39.900415  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 12:11:39.900451  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.903052  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.903452  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.903483  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.903679  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.903870  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.904069  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.904241  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:39.989513  569647 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 12:11:39.993490  569647 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 12:11:39.993517  569647 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 12:11:39.993594  569647 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 12:11:39.993691  569647 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 12:11:39.993814  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 12:11:40.002553  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:11:40.024737  569647 start.go:296] duration metric: took 124.706191ms for postStartSetup
	I0414 12:11:40.024779  569647 fix.go:56] duration metric: took 18.789494511s for fixHost
	I0414 12:11:40.024800  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.027427  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.027719  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.027751  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.027915  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.028129  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.028292  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.028414  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.028579  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:40.028888  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:40.028904  569647 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 12:11:40.135768  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744632700.105724681
	
	I0414 12:11:40.135795  569647 fix.go:216] guest clock: 1744632700.105724681
	I0414 12:11:40.135802  569647 fix.go:229] Guest: 2025-04-14 12:11:40.105724681 +0000 UTC Remote: 2025-04-14 12:11:40.024782852 +0000 UTC m=+18.941186859 (delta=80.941829ms)
	I0414 12:11:40.135840  569647 fix.go:200] guest clock delta is within tolerance: 80.941829ms
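	The delta above is simply the guest "date +%s.%N" reading minus the host-side timestamp; the arithmetic checks out:
	# 1744632700.105724681 (guest) - 1744632700.024782852 (host) = 0.080941829 s ≈ 80.94 ms
	echo '1744632700.105724681 - 1744632700.024782852' | bc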
	I0414 12:11:40.135845  569647 start.go:83] releasing machines lock for "newest-cni-104469", held for 18.900572975s
	I0414 12:11:40.135867  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.136110  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:40.139092  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.139498  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.139528  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.139729  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140213  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140375  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140494  569647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 12:11:40.140550  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.140597  569647 ssh_runner.go:195] Run: cat /version.json
	I0414 12:11:40.140620  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.143168  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143464  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143523  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.143545  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143717  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.143928  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.143941  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.143958  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.144105  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.144137  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.144273  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:40.144422  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.144572  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.144723  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:40.253928  569647 ssh_runner.go:195] Run: systemctl --version
	I0414 12:11:40.259508  569647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 12:11:40.399347  569647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 12:11:40.404975  569647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 12:11:40.405068  569647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 12:11:40.420258  569647 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 12:11:40.420289  569647 start.go:495] detecting cgroup driver to use...
	I0414 12:11:40.420369  569647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 12:11:40.436755  569647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 12:11:40.450152  569647 docker.go:217] disabling cri-docker service (if available) ...
	I0414 12:11:40.450245  569647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 12:11:40.464139  569647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 12:11:40.477505  569647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 12:11:40.591544  569647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 12:11:40.762510  569647 docker.go:233] disabling docker service ...
	I0414 12:11:40.762590  569647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 12:11:40.777138  569647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 12:11:40.790390  569647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 12:11:40.907968  569647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 12:11:41.012941  569647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 12:11:41.026846  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 12:11:41.044129  569647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 12:11:41.044224  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.054103  569647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 12:11:41.054180  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.063996  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.073838  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.083706  569647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 12:11:41.093759  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.103550  569647 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.118834  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
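	A hedged check of the sed edits above, assuming the stock minikube ISO layout of 02-crio.conf: the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl should now read roughly as follows.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",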
	I0414 12:11:41.128734  569647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 12:11:41.137754  569647 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 12:11:41.137910  569647 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 12:11:41.150890  569647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 12:11:41.160130  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:41.274669  569647 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 12:11:41.366746  569647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 12:11:41.366838  569647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 12:11:41.371400  569647 start.go:563] Will wait 60s for crictl version
	I0414 12:11:41.371472  569647 ssh_runner.go:195] Run: which crictl
	I0414 12:11:41.375071  569647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 12:11:41.414018  569647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 12:11:41.414145  569647 ssh_runner.go:195] Run: crio --version
	I0414 12:11:41.441601  569647 ssh_runner.go:195] Run: crio --version
	I0414 12:11:41.470278  569647 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 12:11:41.471736  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:41.474769  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:41.475176  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:41.475208  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:41.475465  569647 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 12:11:41.480427  569647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:11:41.494695  569647 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0414 12:11:41.174407  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:41.188283  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:41.188349  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:41.218963  567375 cri.go:89] found id: ""
	I0414 12:11:41.218995  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.219007  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:41.219015  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:41.219080  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:41.254974  567375 cri.go:89] found id: ""
	I0414 12:11:41.255007  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.255016  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:41.255022  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:41.255083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:41.291440  567375 cri.go:89] found id: ""
	I0414 12:11:41.291478  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.291490  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:41.291498  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:41.291566  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:41.326668  567375 cri.go:89] found id: ""
	I0414 12:11:41.326699  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.326710  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:41.326718  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:41.326788  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:41.358533  567375 cri.go:89] found id: ""
	I0414 12:11:41.358564  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.358577  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:41.358585  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:41.358656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:41.390847  567375 cri.go:89] found id: ""
	I0414 12:11:41.390892  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.390904  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:41.390916  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:41.390986  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:41.422995  567375 cri.go:89] found id: ""
	I0414 12:11:41.423029  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.423040  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:41.423047  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:41.423108  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:41.455329  567375 cri.go:89] found id: ""
	I0414 12:11:41.455359  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.455371  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:41.455384  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:41.455398  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:41.506257  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:41.506288  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:41.518836  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:41.518866  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:41.588714  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:41.588744  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:41.588764  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:41.672001  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:41.672039  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:44.216461  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:44.229313  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:44.229404  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:44.263625  567375 cri.go:89] found id: ""
	I0414 12:11:44.263662  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.263674  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:44.263682  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:44.263746  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:44.295775  567375 cri.go:89] found id: ""
	I0414 12:11:44.295815  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.295829  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:44.295836  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:44.295905  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:44.340233  567375 cri.go:89] found id: ""
	I0414 12:11:44.340270  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.340281  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:44.340289  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:44.340358  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:44.379008  567375 cri.go:89] found id: ""
	I0414 12:11:44.379046  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.379060  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:44.379070  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:44.379148  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:44.412114  567375 cri.go:89] found id: ""
	I0414 12:11:44.412151  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.412160  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:44.412166  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:44.412217  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:44.446940  567375 cri.go:89] found id: ""
	I0414 12:11:44.446967  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.446975  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:44.446982  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:44.447037  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:44.494452  567375 cri.go:89] found id: ""
	I0414 12:11:44.494491  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.494503  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:44.494511  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:44.494578  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:44.531111  567375 cri.go:89] found id: ""
	I0414 12:11:44.531158  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.531171  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:44.531185  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:44.531201  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:44.590909  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:44.590954  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:44.607376  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:44.607428  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:44.678145  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:44.678171  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:44.678190  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:44.758306  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:44.758351  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:41.495951  569647 kubeadm.go:883] updating cluster {Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-1
04469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 12:11:41.496082  569647 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:11:41.496147  569647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:11:41.537224  569647 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 12:11:41.537318  569647 ssh_runner.go:195] Run: which lz4
	I0414 12:11:41.541348  569647 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 12:11:41.545374  569647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 12:11:41.545417  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 12:11:42.748200  569647 crio.go:462] duration metric: took 1.206904316s to copy over tarball
	I0414 12:11:42.748273  569647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 12:11:44.940244  569647 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191944178s)
	I0414 12:11:44.940275  569647 crio.go:469] duration metric: took 2.192045159s to extract the tarball
	I0414 12:11:44.940282  569647 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 12:11:44.976846  569647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:11:45.017205  569647 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 12:11:45.017232  569647 cache_images.go:84] Images are preloaded, skipping loading
	I0414 12:11:45.017240  569647 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.32.2 crio true true} ...
	I0414 12:11:45.017357  569647 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-104469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 12:11:45.017443  569647 ssh_runner.go:195] Run: crio config
	I0414 12:11:45.066045  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:45.066074  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:45.066086  569647 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0414 12:11:45.066108  569647 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-104469 NodeName:newest-cni-104469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 12:11:45.066250  569647 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-104469"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.116"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 12:11:45.066317  569647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 12:11:45.075884  569647 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 12:11:45.075969  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 12:11:45.084969  569647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0414 12:11:45.100691  569647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 12:11:45.116384  569647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0414 12:11:45.131922  569647 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0414 12:11:45.135512  569647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:11:45.146978  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:45.261463  569647 ssh_runner.go:195] Run: sudo systemctl start kubelet
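	A hedged check, not part of the run, that the 10-kubeadm.conf drop-in scp'd above is the unit systemd actually loaded:
	sudo systemctl cat kubelet          # merged unit; ExecStart should include --node-ip=192.168.61.116
	sudo systemctl is-active kubelet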
	I0414 12:11:45.279128  569647 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469 for IP: 192.168.61.116
	I0414 12:11:45.279159  569647 certs.go:194] generating shared ca certs ...
	I0414 12:11:45.279178  569647 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:45.279434  569647 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 12:11:45.279505  569647 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 12:11:45.279520  569647 certs.go:256] generating profile certs ...
	I0414 12:11:45.279642  569647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/client.key
	I0414 12:11:45.279729  569647 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.key.774aa14a
	I0414 12:11:45.279810  569647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.key
	I0414 12:11:45.279954  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 12:11:45.279996  569647 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 12:11:45.280007  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 12:11:45.280039  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 12:11:45.280076  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 12:11:45.280105  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 12:11:45.280168  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:11:45.280847  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 12:11:45.314145  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 12:11:45.338428  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 12:11:45.370752  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 12:11:45.397988  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 12:11:45.425777  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 12:11:45.448378  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 12:11:45.472600  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 12:11:45.495315  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 12:11:45.517788  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 12:11:45.541189  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 12:11:45.566831  569647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 12:11:45.584065  569647 ssh_runner.go:195] Run: openssl version
	I0414 12:11:45.589870  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 12:11:45.600360  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.604736  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.604808  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.610342  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 12:11:45.620182  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 12:11:45.630441  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.634658  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.634747  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.640599  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 12:11:45.651269  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 12:11:45.662116  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.666678  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.666779  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.672334  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
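The three "ln -fs" steps above follow the OpenSSL CA directory convention: each trusted certificate in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash is the output of "openssl x509 -hash -noout". A minimal Go sketch of that convention, assuming openssl is on PATH; certPath and certsDir are placeholders, and the real flow runs the equivalent commands over SSH:

	package sketch

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash mirrors the "openssl x509 -hash -noout" + "ln -fs" pair in the
	// log: it asks openssl for the subject hash of certPath and creates the
	// <hash>.0 symlink that TLS libraries use to look up trusted CAs.
	func linkCertByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // "-f" behaviour: replace an existing link if present
		return os.Symlink(certPath, link)
	}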
	I0414 12:11:45.682554  569647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 12:11:45.686828  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 12:11:45.693016  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 12:11:45.698975  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 12:11:45.704832  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 12:11:45.710682  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 12:11:45.716357  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
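Each "-checkend 86400" run above asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit is the signal to regenerate it. A rough Go equivalent using crypto/x509, illustrative only; certPath is a placeholder:

	package sketch

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at certPath expires within d,
	// which is what "openssl x509 -checkend 86400" checks for d = 24h.
	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}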
	I0414 12:11:45.722031  569647 kubeadm.go:392] StartCluster: {Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:45.722164  569647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 12:11:45.722256  569647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:11:45.762047  569647 cri.go:89] found id: ""
	I0414 12:11:45.762149  569647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 12:11:45.772159  569647 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 12:11:45.772188  569647 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 12:11:45.772238  569647 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 12:11:45.781693  569647 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 12:11:45.782599  569647 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-104469" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:45.782917  569647 kubeconfig.go:62] /home/jenkins/minikube-integration/20534-503273/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-104469" cluster setting kubeconfig missing "newest-cni-104469" context setting]
	I0414 12:11:45.783560  569647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:45.785561  569647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 12:11:45.795019  569647 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0414 12:11:45.795061  569647 kubeadm.go:1160] stopping kube-system containers ...
	I0414 12:11:45.795073  569647 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 12:11:45.795121  569647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:11:45.832759  569647 cri.go:89] found id: ""
	I0414 12:11:45.832853  569647 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 12:11:45.849887  569647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:11:45.860004  569647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:11:45.860044  569647 kubeadm.go:157] found existing configuration files:
	
	I0414 12:11:45.860105  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:11:45.869219  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:11:45.869287  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:11:45.878859  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:11:45.890576  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:11:45.890661  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:11:45.909668  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:11:45.918990  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:11:45.919080  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:11:45.928230  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:11:45.936683  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:11:45.936746  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:11:45.945411  569647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:11:45.954641  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.058335  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:47.316487  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:47.331760  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:47.331855  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:47.366754  567375 cri.go:89] found id: ""
	I0414 12:11:47.366790  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.366800  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:47.366807  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:47.366876  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:47.401386  567375 cri.go:89] found id: ""
	I0414 12:11:47.401418  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.401430  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:47.401438  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:47.401500  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:47.436630  567375 cri.go:89] found id: ""
	I0414 12:11:47.436672  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.436686  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:47.436695  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:47.436770  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:47.476106  567375 cri.go:89] found id: ""
	I0414 12:11:47.476140  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.476149  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:47.476156  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:47.476224  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:47.511092  567375 cri.go:89] found id: ""
	I0414 12:11:47.511117  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.511126  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:47.511134  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:47.511196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:47.543336  567375 cri.go:89] found id: ""
	I0414 12:11:47.543365  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.543375  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:47.543392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:47.543455  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:47.591258  567375 cri.go:89] found id: ""
	I0414 12:11:47.591282  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.591307  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:47.591315  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:47.591378  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:47.631828  567375 cri.go:89] found id: ""
	I0414 12:11:47.631858  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.631867  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:47.631888  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:47.631901  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:47.681449  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:47.681491  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:47.695772  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:47.695808  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:47.767246  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:47.767279  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:47.767312  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:47.849554  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:47.849608  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:46.644225  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.835780  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.909528  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
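On this restart path the cluster is not re-initialized with a full "kubeadm init"; instead the individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) are replayed against the generated kubeadm.yaml. A condensed sketch of that sequence, illustrative only; in the real flow each command runs over SSH with the pinned binaries directory prepended to PATH:

	package sketch

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runInitPhases replays the kubeadm init phases seen in the log, in order,
	// against an existing config file instead of running a full "kubeadm init".
	func runInitPhases(configPath string) error {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, strings.Fields(p)...)
			args = append(args, "--config", configPath)
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("phase %q failed: %v\n%s", p, err, out)
			}
		}
		return nil
	}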
	I0414 12:11:47.008035  569647 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:11:47.008154  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:47.508435  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:48.008446  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:48.509090  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:49.008987  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:49.097677  569647 api_server.go:72] duration metric: took 2.08963857s to wait for apiserver process to appear ...
	I0414 12:11:49.097719  569647 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:11:49.097747  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:49.098477  569647 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0414 12:11:49.597917  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:51.914295  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 12:11:51.914332  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 12:11:51.914351  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:51.950360  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 12:11:51.950390  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 12:11:52.098794  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:52.144939  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 12:11:52.144974  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 12:11:52.598644  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:52.602917  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 12:11:52.602941  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 12:11:53.098719  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:53.103810  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0414 12:11:53.110249  569647 api_server.go:141] control plane version: v1.32.2
	I0414 12:11:53.110286  569647 api_server.go:131] duration metric: took 4.012559017s to wait for apiserver health ...
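The healthz wait above simply polls https://<node-ip>:8443/healthz until it returns 200, treating the intermediate responses as "not ready yet". The early 403s for system:anonymous are likely expected while the rbac/bootstrap-roles post-start hook (shown as failed in the 500 bodies) has not yet installed the bindings that allow unauthenticated /healthz probes. A minimal Go sketch of such a wait loop, illustrative only and not the project's actual implementation:

	package sketch

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the context expires, mirroring the retry loop in the log above.
	func waitForHealthz(ctx context.Context, baseURL string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The serving cert is only trusted via the cluster CA, which this sketch
			// does not load; skipping verification keeps the example short.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(baseURL + "/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}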
	I0414 12:11:53.110296  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:53.110304  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:53.112437  569647 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 12:11:53.113774  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 12:11:53.123553  569647 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
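The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration selected two lines earlier. Its exact contents are not shown in the log; the sketch below only illustrates the general shape of a bridge-plus-portmap conflist (name, bridge device, and CIDR are placeholders, not the file minikube actually writes):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// bridgeConflist builds an illustrative CNI conflist for the "bridge" plugin with
	// host-local IPAM; all values are placeholders, not minikube's actual file.
	func bridgeConflist(podCIDR string) ([]byte, error) {
		conf := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "example-bridge",
			"plugins": []map[string]interface{}{
				{
					"type":      "bridge",
					"bridge":    "cni0",
					"isGateway": true,
					"ipMasq":    true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": podCIDR,
						"routes": []map[string]string{{"dst": "0.0.0.0/0"}},
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		return json.MarshalIndent(conf, "", "  ")
	}

	func main() {
		out, _ := bridgeConflist("10.42.0.0/16")
		fmt.Println(string(out))
	}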
	I0414 12:11:53.140406  569647 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:11:53.150738  569647 system_pods.go:59] 8 kube-system pods found
	I0414 12:11:53.150784  569647 system_pods.go:61] "coredns-668d6bf9bc-w4bzb" [e6206551-e8cd-4eec-9fe0-d1e6a8ce92c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 12:11:53.150794  569647 system_pods.go:61] "etcd-newest-cni-104469" [2ee08cb2-71cf-4277-a620-2e489f3f2446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 12:11:53.150802  569647 system_pods.go:61] "kube-apiserver-newest-cni-104469" [14f7a41a-018f-4f66-bda8-f372f0bc5064] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 12:11:53.150816  569647 system_pods.go:61] "kube-controller-manager-newest-cni-104469" [178f361d-e24b-4bfb-a916-3507cd011e3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 12:11:53.150824  569647 system_pods.go:61] "kube-proxy-tt6kz" [3ef9ada6-36d4-4ba9-92e5-e3542317f468] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 12:11:53.150833  569647 system_pods.go:61] "kube-scheduler-newest-cni-104469" [43010056-fcfe-4ef5-a834-9651e3123276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 12:11:53.150837  569647 system_pods.go:61] "metrics-server-f79f97bbb-vrl2k" [6cec0337-8996-4c11-86b6-be3f25e2eeda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:11:53.150843  569647 system_pods.go:61] "storage-provisioner" [3998c14d-608d-43b5-a6b9-972918ac6675] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 12:11:53.150850  569647 system_pods.go:74] duration metric: took 10.421128ms to wait for pod list to return data ...
	I0414 12:11:53.150859  569647 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:11:53.153603  569647 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:11:53.153640  569647 node_conditions.go:123] node cpu capacity is 2
	I0414 12:11:53.153659  569647 node_conditions.go:105] duration metric: took 2.796154ms to run NodePressure ...
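The NodePressure step reads each node's capacity and conditions from the API and fails if memory, disk, or PID pressure is reported. A rough client-go sketch of the same check, illustrative only; kubeconfig handling is omitted and clientset is assumed to be an already-built client:

	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// checkNodePressure lists all nodes, prints the capacity fields shown in the log,
	// and returns an error if any node reports memory, disk, or PID pressure.
	func checkNodePressure(ctx context.Context, clientset kubernetes.Interface) error {
		nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		pressure := map[corev1.NodeConditionType]bool{
			corev1.NodeMemoryPressure: true,
			corev1.NodeDiskPressure:   true,
			corev1.NodePIDPressure:    true,
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				if pressure[c.Type] && c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
		return nil
	}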
	I0414 12:11:53.153685  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:53.451414  569647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:11:53.463015  569647 ops.go:34] apiserver oom_adj: -16
	I0414 12:11:53.463044  569647 kubeadm.go:597] duration metric: took 7.690847961s to restartPrimaryControlPlane
	I0414 12:11:53.463065  569647 kubeadm.go:394] duration metric: took 7.741049865s to StartCluster
	I0414 12:11:53.463091  569647 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:53.463196  569647 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:53.464309  569647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:53.464608  569647 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:11:53.464788  569647 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 12:11:53.464894  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:53.464933  569647 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-104469"
	I0414 12:11:53.464954  569647 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-104469"
	I0414 12:11:53.464960  569647 addons.go:69] Setting default-storageclass=true in profile "newest-cni-104469"
	W0414 12:11:53.464967  569647 addons.go:247] addon storage-provisioner should already be in state true
	I0414 12:11:53.464983  569647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-104469"
	I0414 12:11:53.464984  569647 addons.go:69] Setting metrics-server=true in profile "newest-cni-104469"
	I0414 12:11:53.465025  569647 addons.go:238] Setting addon metrics-server=true in "newest-cni-104469"
	W0414 12:11:53.465050  569647 addons.go:247] addon metrics-server should already be in state true
	I0414 12:11:53.465049  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.464964  569647 addons.go:69] Setting dashboard=true in profile "newest-cni-104469"
	I0414 12:11:53.465086  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.465109  569647 addons.go:238] Setting addon dashboard=true in "newest-cni-104469"
	W0414 12:11:53.465120  569647 addons.go:247] addon dashboard should already be in state true
	I0414 12:11:53.465150  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.465445  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465486  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465499  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465445  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465525  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465568  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465599  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465623  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.467469  569647 out.go:177] * Verifying Kubernetes components...
	I0414 12:11:53.468679  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:53.485803  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42723
	I0414 12:11:53.486041  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0414 12:11:53.486193  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0414 12:11:53.486206  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0414 12:11:53.486456  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486715  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486835  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486935  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.487152  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487175  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487310  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487338  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487538  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.487539  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487605  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487814  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487837  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487880  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.487991  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488089  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488232  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488630  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488634  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488690  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.488754  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.488797  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488837  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.507852  569647 addons.go:238] Setting addon default-storageclass=true in "newest-cni-104469"
	W0414 12:11:53.507881  569647 addons.go:247] addon default-storageclass should already be in state true
	I0414 12:11:53.507917  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.508343  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.508402  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.511624  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0414 12:11:53.512195  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.512735  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.512757  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.513140  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.513333  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.515362  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.517751  569647 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 12:11:53.518934  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 12:11:53.518959  569647 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 12:11:53.518984  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.524940  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.525463  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.525484  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.525902  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.526120  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.526292  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.526446  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.528938  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0414 12:11:53.529570  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.530010  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.530027  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.530494  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.530752  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.531020  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37235
	I0414 12:11:53.531557  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.531663  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0414 12:11:53.532241  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.532451  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.532470  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.533132  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.533152  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.533484  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.533633  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.533647  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.534060  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.534624  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.534675  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.535130  569647 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:11:53.535462  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.536805  569647 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:11:53.536831  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 12:11:53.536852  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.537516  569647 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 12:11:53.538836  569647 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 12:11:50.386577  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:50.399173  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:50.399257  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:50.429909  567375 cri.go:89] found id: ""
	I0414 12:11:50.429938  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.429948  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:50.429956  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:50.430016  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:50.460948  567375 cri.go:89] found id: ""
	I0414 12:11:50.460981  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.460990  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:50.460996  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:50.461056  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:50.492141  567375 cri.go:89] found id: ""
	I0414 12:11:50.492172  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.492179  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:50.492186  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:50.492249  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:50.524274  567375 cri.go:89] found id: ""
	I0414 12:11:50.524301  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.524309  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:50.524317  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:50.524391  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:50.556554  567375 cri.go:89] found id: ""
	I0414 12:11:50.556583  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.556594  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:50.556601  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:50.556671  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:50.598848  567375 cri.go:89] found id: ""
	I0414 12:11:50.598878  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.598889  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:50.598898  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:50.598965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:50.629450  567375 cri.go:89] found id: ""
	I0414 12:11:50.629482  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.629491  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:50.629497  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:50.629550  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:50.660726  567375 cri.go:89] found id: ""
	I0414 12:11:50.660764  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.660778  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:50.660790  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:50.660809  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:50.711830  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:50.711868  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:50.724837  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:50.724869  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:50.787307  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:50.787340  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:50.787356  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:50.861702  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:50.861749  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:53.398783  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:53.412227  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:53.412304  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:53.451115  567375 cri.go:89] found id: ""
	I0414 12:11:53.451149  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.451161  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:53.451170  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:53.451236  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:53.489749  567375 cri.go:89] found id: ""
	I0414 12:11:53.489783  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.489793  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:53.489801  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:53.489847  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:53.542102  567375 cri.go:89] found id: ""
	I0414 12:11:53.542122  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.542132  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:53.542140  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:53.542196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:53.582780  567375 cri.go:89] found id: ""
	I0414 12:11:53.582814  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.582827  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:53.582837  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:53.582900  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:53.616309  567375 cri.go:89] found id: ""
	I0414 12:11:53.616339  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.616355  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:53.616368  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:53.616429  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:53.650528  567375 cri.go:89] found id: ""
	I0414 12:11:53.650564  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.650578  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:53.650586  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:53.650658  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:53.687484  567375 cri.go:89] found id: ""
	I0414 12:11:53.687514  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.687525  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:53.687532  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:53.687593  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:53.729803  567375 cri.go:89] found id: ""
	I0414 12:11:53.729836  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.729848  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:53.729866  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:53.729883  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:53.787229  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:53.787281  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:53.803320  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:53.803362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:53.879853  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:53.879875  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:53.879890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:53.967553  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:53.967596  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
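
	The block above is minikube's log gatherer working against a dead control plane: for each expected component it runs 'sudo crictl ps -a --quiet --name=<component>', records "0 containers" when the output is empty, and then falls back to journalctl, dmesg, 'kubectl describe nodes' and a raw container-status listing. A minimal Go sketch of that crictl lookup pattern (the helper and its names are illustrative; only the crictl invocation is taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the lookup in the log: crictl prints one container ID
	// per line, so empty output means no container matches the given name.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-proxy"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%d containers matching %q: %v\n", len(ids), name, ids)
		}
	}
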
	I0414 12:11:53.539970  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 12:11:53.540182  569647 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 12:11:53.540212  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.541047  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.541429  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.541524  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.541675  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.541851  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.542530  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.542705  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.543948  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.544373  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.544393  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.544612  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.544830  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.544993  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.545159  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.558318  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0414 12:11:53.558838  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.559474  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.559499  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.560235  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.560534  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.562619  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.562979  569647 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 12:11:53.562998  569647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 12:11:53.563018  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.566082  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.566663  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.566691  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.566774  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.566975  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.567140  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.567309  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
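
	The GetSSHHostname/GetSSHPort/GetSSHKeyPath/GetSSHUsername calls above are libmachine resolving the connection details (192.168.61.116:22, user docker, the per-machine id_rsa) that sshutil.go then wraps in an SSH client for the addon scp and ssh_runner steps. A rough sketch of building an equivalent client with golang.org/x/crypto/ssh, assuming those resolved values; this is not minikube's own sshutil implementation:

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// newSSHClient builds a client from the same four pieces of information the
	// driver returns above: host, port, private key path and username.
	func newSSHClient(ip, port, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		return ssh.Dial("tcp", ip+":"+port, cfg)
	}

	func main() {
		client, err := newSSHClient("192.168.61.116", "22",
			"/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa",
			"docker")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		log.Println("connected")
	}
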
	I0414 12:11:53.653877  569647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:11:53.673348  569647 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:11:53.673443  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:53.686796  569647 api_server.go:72] duration metric: took 222.138072ms to wait for apiserver process to appear ...
	I0414 12:11:53.686829  569647 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:11:53.686850  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:53.691583  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0414 12:11:53.692740  569647 api_server.go:141] control plane version: v1.32.2
	I0414 12:11:53.692773  569647 api_server.go:131] duration metric: took 5.935428ms to wait for apiserver health ...
	I0414 12:11:53.692785  569647 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:11:53.696080  569647 system_pods.go:59] 8 kube-system pods found
	I0414 12:11:53.696119  569647 system_pods.go:61] "coredns-668d6bf9bc-w4bzb" [e6206551-e8cd-4eec-9fe0-d1e6a8ce92c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 12:11:53.696131  569647 system_pods.go:61] "etcd-newest-cni-104469" [2ee08cb2-71cf-4277-a620-2e489f3f2446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 12:11:53.696144  569647 system_pods.go:61] "kube-apiserver-newest-cni-104469" [14f7a41a-018f-4f66-bda8-f372f0bc5064] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 12:11:53.696152  569647 system_pods.go:61] "kube-controller-manager-newest-cni-104469" [178f361d-e24b-4bfb-a916-3507cd011e3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 12:11:53.696166  569647 system_pods.go:61] "kube-proxy-tt6kz" [3ef9ada6-36d4-4ba9-92e5-e3542317f468] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 12:11:53.696174  569647 system_pods.go:61] "kube-scheduler-newest-cni-104469" [43010056-fcfe-4ef5-a834-9651e3123276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 12:11:53.696185  569647 system_pods.go:61] "metrics-server-f79f97bbb-vrl2k" [6cec0337-8996-4c11-86b6-be3f25e2eeda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:11:53.696194  569647 system_pods.go:61] "storage-provisioner" [3998c14d-608d-43b5-a6b9-972918ac6675] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 12:11:53.696206  569647 system_pods.go:74] duration metric: took 3.412863ms to wait for pod list to return data ...
	I0414 12:11:53.696220  569647 default_sa.go:34] waiting for default service account to be created ...
	I0414 12:11:53.698670  569647 default_sa.go:45] found service account: "default"
	I0414 12:11:53.698689  569647 default_sa.go:55] duration metric: took 2.459718ms for default service account to be created ...
	I0414 12:11:53.698700  569647 kubeadm.go:582] duration metric: took 234.05034ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0414 12:11:53.698722  569647 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:11:53.700885  569647 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:11:53.700905  569647 node_conditions.go:123] node cpu capacity is 2
	I0414 12:11:53.700921  569647 node_conditions.go:105] duration metric: took 2.19269ms to run NodePressure ...
	I0414 12:11:53.700934  569647 start.go:241] waiting for startup goroutines ...
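
	Once the kubelet is up, the 569647 run waits for the apiserver process ('pgrep -xnf kube-apiserver.*minikube.*'), then polls https://192.168.61.116:8443/healthz until it answers 200 "ok", and only afterwards checks kube-system pods, the default service account and node pressure. A minimal sketch of that healthz probe; the endpoint comes from the log, but the plain TLS-skipping client here is an assumption (minikube's real check authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy returns true when GET /healthz answers HTTP 200 with body "ok",
	// the same condition logged as "returned 200: ok" above.
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		healthy, err := apiserverHealthy("https://192.168.61.116:8443/healthz")
		fmt.Println(healthy, err)
	}
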
	I0414 12:11:53.730838  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 12:11:53.789427  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:11:53.829021  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 12:11:53.829048  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 12:11:53.841313  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 12:11:53.841347  569647 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 12:11:53.907638  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 12:11:53.907670  569647 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 12:11:53.908006  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 12:11:53.908053  569647 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 12:11:53.983378  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:11:53.983415  569647 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 12:11:54.086187  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 12:11:54.086215  569647 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 12:11:54.087358  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:11:54.186051  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 12:11:54.186081  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 12:11:54.280182  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 12:11:54.280213  569647 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 12:11:54.380765  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 12:11:54.380797  569647 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 12:11:54.389761  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.389795  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.390159  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.390186  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.390206  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.390216  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.390216  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:54.390489  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.390507  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.399373  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.399399  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.399699  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.399801  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.399744  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:54.445995  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 12:11:54.446027  569647 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 12:11:54.478086  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 12:11:54.478117  569647 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 12:11:54.547414  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 12:11:54.547444  569647 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 12:11:54.635136  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
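
	Each addon above is staged under /etc/kubernetes/addons (some files scp'd from disk, some from memory) and then applied in a single invocation of the version-matched kubectl shipped on the node, with KUBECONFIG pointing at the node-local kubeconfig. A small illustrative wrapper around that apply step (paths copied from the log; the helper itself is not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifests mirrors the logged command: sudo KUBECONFIG=<cfg> <kubectl> apply -f m1 -f m2 ...
	// sudo accepts the leading VAR=value argument and exports it for the command it runs.
	func applyManifests(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		err := applyManifests(
			"/var/lib/minikube/binaries/v1.32.2/kubectl",
			"/var/lib/minikube/kubeconfig",
			[]string{
				"/etc/kubernetes/addons/dashboard-ns.yaml",
				"/etc/kubernetes/addons/dashboard-dp.yaml",
				"/etc/kubernetes/addons/dashboard-svc.yaml",
			},
		)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
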
	I0414 12:11:55.810138  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.020663207s)
	I0414 12:11:55.810204  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.810217  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.810539  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.810567  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.810584  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.810593  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.810853  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.810870  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.975467  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.888057826s)
	I0414 12:11:55.975538  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.975556  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.975946  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.975975  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.975977  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:55.975985  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.976010  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.976328  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:55.976401  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.976418  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.976435  569647 addons.go:479] Verifying addon metrics-server=true in "newest-cni-104469"
	I0414 12:11:56.493194  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.858004612s)
	I0414 12:11:56.493258  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:56.493276  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:56.493618  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:56.493637  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:56.493654  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:56.493669  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:56.493684  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:56.493941  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:56.493958  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:56.495184  569647 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-104469 addons enable metrics-server
	
	I0414 12:11:56.496411  569647 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 12:11:56.497675  569647 addons.go:514] duration metric: took 3.032922178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 12:11:56.497730  569647 start.go:246] waiting for cluster config update ...
	I0414 12:11:56.497749  569647 start.go:255] writing updated cluster config ...
	I0414 12:11:56.498155  569647 ssh_runner.go:195] Run: rm -f paused
	I0414 12:11:56.560467  569647 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 12:11:56.562298  569647 out.go:177] * Done! kubectl is now configured to use "newest-cni-104469" cluster and "default" namespace by default
	I0414 12:11:56.509793  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:56.527348  567375 kubeadm.go:597] duration metric: took 4m3.66529435s to restartPrimaryControlPlane
	W0414 12:11:56.527439  567375 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 12:11:56.527471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:11:57.129851  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:11:57.148604  567375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:11:57.161658  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:11:57.174834  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:11:57.174855  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:11:57.174903  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:11:57.187575  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:11:57.187656  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:11:57.200722  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:11:57.212875  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:11:57.212938  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:11:57.224425  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.234090  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:11:57.234150  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.244756  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:11:57.254119  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:11:57.254179  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
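
	The grep/rm pairs above are minikube's stale-config cleanup before re-running kubeadm: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443 and removed otherwise so kubeadm can regenerate it. In this run the greps exit with status 2 simply because the earlier 'kubeadm reset' already deleted the files. A hedged sketch of the same check-and-remove loop:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or unexpected endpoint: remove it so kubeadm rewrites it,
				// mirroring the best-effort 'sudo rm -f' in the log.
				fmt.Println("removing", path)
				_ = os.Remove(path)
			}
		}
	}
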
	I0414 12:11:57.263664  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:11:57.335377  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:11:57.335465  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:11:57.480832  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:11:57.481011  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:11:57.481159  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:11:57.665866  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:11:57.667749  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:11:57.667857  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:11:57.667951  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:11:57.668066  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:11:57.668147  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:11:57.668265  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:11:57.668349  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:11:57.668440  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:11:57.668605  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:11:57.669216  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:11:57.669669  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:11:57.669739  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:11:57.669815  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:11:57.786691  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:11:58.140236  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:11:58.329890  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:11:58.422986  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:11:58.436920  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:11:58.438164  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:11:58.438254  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:11:58.590525  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:11:58.592980  567375 out.go:235]   - Booting up control plane ...
	I0414 12:11:58.593129  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:11:58.603522  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:11:58.603646  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:11:58.604814  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:11:58.609402  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:12:38.610672  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:12:38.611482  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:38.611732  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:43.612152  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:43.612389  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:53.612812  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:53.613076  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:13.613917  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:13.614151  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616094  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:53.616337  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616381  567375 kubeadm.go:310] 
	I0414 12:13:53.616467  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:13:53.616525  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:13:53.616535  567375 kubeadm.go:310] 
	I0414 12:13:53.616587  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:13:53.616626  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:13:53.616782  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:13:53.616803  567375 kubeadm.go:310] 
	I0414 12:13:53.616927  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:13:53.616975  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:13:53.617019  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:13:53.617040  567375 kubeadm.go:310] 
	I0414 12:13:53.617133  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:13:53.617207  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:13:53.617220  567375 kubeadm.go:310] 
	I0414 12:13:53.617379  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:13:53.617479  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:13:53.617552  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:13:53.617615  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:13:53.617621  567375 kubeadm.go:310] 
	I0414 12:13:53.618369  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:13:53.618463  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:13:53.618564  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 12:13:53.618776  567375 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
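
	The repeated kubelet-check lines show what kubeadm's wait-control-plane phase is doing while it waits: polling the kubelet's healthz endpoint on localhost:10248 at growing intervals and, when every probe keeps getting "connection refused", giving up with "timed out waiting for the condition". That is why a kubelet that never comes up on this v1.20.0 run surfaces as a generic init timeout rather than a more specific error. A minimal, hypothetical probe loop illustrating the check (the timeout and retry interval are assumptions, not kubeadm's exact schedule):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// kubeletHealthy performs the same HTTP call quoted in the log:
	// curl -sSL http://localhost:10248/healthz
	func kubeletHealthy() bool {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			return false // typically "connection refused" while the kubelet is down
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if kubeletHealthy() {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Println("kubelet isn't running or healthy yet; retrying")
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet")
	}
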
	
	I0414 12:13:53.618845  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:13:54.079747  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:13:54.094028  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:13:54.103509  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:13:54.103536  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:13:54.103601  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:13:54.112305  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:13:54.112379  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:13:54.121095  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:13:54.129511  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:13:54.129569  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:13:54.138481  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.147165  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:13:54.147236  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.157633  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:13:54.167514  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:13:54.167580  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:13:54.177012  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:13:54.380519  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:15:50.310615  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:15:50.310709  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 12:15:50.312555  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:15:50.312621  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:15:50.312752  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:15:50.312914  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:15:50.313060  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:15:50.313152  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:15:50.316148  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:15:50.316217  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:15:50.316295  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:15:50.316380  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:15:50.316450  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:15:50.316548  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:15:50.316653  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:15:50.316746  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:15:50.316835  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:15:50.316942  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:15:50.317005  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:15:50.317040  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:15:50.317086  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:15:50.317133  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:15:50.317180  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:15:50.317230  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:15:50.317288  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:15:50.317415  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:15:50.317492  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:15:50.317525  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:15:50.317593  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:15:50.319132  567375 out.go:235]   - Booting up control plane ...
	I0414 12:15:50.319215  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:15:50.319298  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:15:50.319374  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:15:50.319478  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:15:50.319619  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:15:50.319660  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:15:50.319744  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.319956  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320056  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320241  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320326  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320504  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320593  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320780  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320883  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.321042  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.321060  567375 kubeadm.go:310] 
	I0414 12:15:50.321125  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:15:50.321180  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:15:50.321189  567375 kubeadm.go:310] 
	I0414 12:15:50.321243  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:15:50.321291  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:15:50.321409  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:15:50.321418  567375 kubeadm.go:310] 
	I0414 12:15:50.321529  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:15:50.321561  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:15:50.321589  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:15:50.321601  567375 kubeadm.go:310] 
	I0414 12:15:50.321700  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:15:50.321774  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:15:50.321780  567375 kubeadm.go:310] 
	I0414 12:15:50.321876  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:15:50.321967  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:15:50.322037  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:15:50.322099  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:15:50.322146  567375 kubeadm.go:310] 
	I0414 12:15:50.322192  567375 kubeadm.go:394] duration metric: took 7m57.509642242s to StartCluster
	I0414 12:15:50.322260  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:15:50.322317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:15:50.365321  567375 cri.go:89] found id: ""
	I0414 12:15:50.365360  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.365372  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:15:50.365388  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:15:50.365462  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:15:50.399917  567375 cri.go:89] found id: ""
	I0414 12:15:50.399956  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.399969  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:15:50.399977  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:15:50.400039  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:15:50.433841  567375 cri.go:89] found id: ""
	I0414 12:15:50.433889  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.433900  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:15:50.433906  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:15:50.433962  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:15:50.472959  567375 cri.go:89] found id: ""
	I0414 12:15:50.472993  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.473001  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:15:50.473008  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:15:50.473069  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:15:50.506397  567375 cri.go:89] found id: ""
	I0414 12:15:50.506434  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.506446  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:15:50.506454  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:15:50.506521  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:15:50.540645  567375 cri.go:89] found id: ""
	I0414 12:15:50.540672  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.540681  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:15:50.540687  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:15:50.540765  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:15:50.574232  567375 cri.go:89] found id: ""
	I0414 12:15:50.574263  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.574272  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:15:50.574278  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:15:50.574333  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:15:50.607014  567375 cri.go:89] found id: ""
	I0414 12:15:50.607044  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.607051  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:15:50.607063  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:15:50.607075  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:15:50.660430  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:15:50.660471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:15:50.676411  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:15:50.676454  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:15:50.782951  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:15:50.782981  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:15:50.782994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:15:50.886201  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:15:50.886250  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 12:15:50.923193  567375 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 12:15:50.923259  567375 out.go:270] * 
	W0414 12:15:50.923378  567375 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.923400  567375 out.go:270] * 
	W0414 12:15:50.924263  567375 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 12:15:50.927535  567375 out.go:201] 
	W0414 12:15:50.928729  567375 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.928768  567375 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 12:15:50.928787  567375 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 12:15:50.930136  567375 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.940025397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632951940004817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb1ed04e-3882-4f0a-bb1b-c05e9d3eb6a0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.940710374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c76ba38-432b-4ff4-845d-b72db6d59547 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.940756566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c76ba38-432b-4ff4-845d-b72db6d59547 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.940802213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5c76ba38-432b-4ff4-845d-b72db6d59547 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.971383658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d990571-5243-4dcc-8fe7-e4a2de2c0300 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.971474957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d990571-5243-4dcc-8fe7-e4a2de2c0300 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.972921891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05d6f51c-9c2b-4229-b46a-b93277ea493e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.973271161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632951973252684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05d6f51c-9c2b-4229-b46a-b93277ea493e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.973712486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e31d656b-1347-4801-bf7a-d042f4ca2653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.973783103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e31d656b-1347-4801-bf7a-d042f4ca2653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:51 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:51.973818253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e31d656b-1347-4801-bf7a-d042f4ca2653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.005207955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e69a4c8-e3d2-4797-bdd6-65a7290df8d9 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.005276191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e69a4c8-e3d2-4797-bdd6-65a7290df8d9 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.006171303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=326f2ec0-158e-4104-b114-43a0ac3cf9a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.006601470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632952006576173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=326f2ec0-158e-4104-b114-43a0ac3cf9a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.007130372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdc47465-8966-4b75-8962-e0e2572ab005 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.007178760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdc47465-8966-4b75-8962-e0e2572ab005 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.007209498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bdc47465-8966-4b75-8962-e0e2572ab005 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.037958195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=404b401a-d9e7-412b-bbe8-7b7973e3eaa2 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.038039852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=404b401a-d9e7-412b-bbe8-7b7973e3eaa2 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.039141577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87f6b0bd-1dc0-48b0-a6a3-61b3f403d4af name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.039563033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744632952039541024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87f6b0bd-1dc0-48b0-a6a3-61b3f403d4af name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.040217313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4aa9ead-0153-4ff2-a6cd-65f3ec81bd2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.040271903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4aa9ead-0153-4ff2-a6cd-65f3ec81bd2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:15:52 old-k8s-version-071646 crio[630]: time="2025-04-14 12:15:52.040313637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a4aa9ead-0153-4ff2-a6cd-65f3ec81bd2d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053736] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038097] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.980944] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.014254] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.547396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.450729] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.064215] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062646] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.183413] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.130846] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.237445] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.667185] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.071958] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.250554] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Apr14 12:08] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 12:11] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Apr14 12:13] systemd-fstab-generator[5319]: Ignoring "noauto" option for root device
	[  +0.058925] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:15:52 up 8 min,  0 users,  load average: 0.01, 0.08, 0.06
	Linux old-k8s-version-071646 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00061eef0)
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a73ef0, 0x4f0ac20, 0xc0009f4640, 0x1, 0xc0001020c0)
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d1260, 0xc0001020c0)
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009f83f0, 0xc0009cd1c0)
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 14 12:15:50 old-k8s-version-071646 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 12:15:50 old-k8s-version-071646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 12:15:50 old-k8s-version-071646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 14 12:15:50 old-k8s-version-071646 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 12:15:50 old-k8s-version-071646 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5550]: I0414 12:15:50.758159    5550 server.go:416] Version: v1.20.0
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5550]: I0414 12:15:50.758467    5550 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5550]: I0414 12:15:50.760296    5550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5550]: W0414 12:15:50.761222    5550 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 12:15:50 old-k8s-version-071646 kubelet[5550]: I0414 12:15:50.761626    5550 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (234.9054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-071646" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (507.50s)
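The log above ends with minikube's own remediation hint (out.go:270): check 'journalctl -xeu kubelet' and retry the start with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of applying that hint by hand is below, assuming the profile name (old-k8s-version-071646), Kubernetes version (v1.20.0), and the KVM2/cri-o configuration implied by this job's name; the exact flags the test harness passes are not verified here.

	# Inspect why the kubelet keeps crash-looping inside the node
	# (per the systemd log above it restarts and exits with status 255 every few seconds).
	out/minikube-linux-amd64 -p old-k8s-version-071646 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-071646 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# Retry the start with the kubelet cgroup driver pinned to systemd, as the suggestion recommends.
	out/minikube-linux-amd64 start -p old-k8s-version-071646 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails, the crictl command quoted in the kubeadm output ('crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause') lists any control-plane containers that started and then crashed, and 'crictl ... logs CONTAINERID' shows why.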

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:15:55.783638  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:16:30.271166  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/default-k8s-diff-port-477612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:16:51.971105  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:17:06.615362  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:17:20.784267  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:17:34.586333  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/no-preload-500740/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:18:02.290024  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/no-preload-500740/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:18:14.426004  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:18:29.680948  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:18:40.357277  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:18:46.407845  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/default-k8s-diff-port-477612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:18:51.499610  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:19:14.112889  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/default-k8s-diff-port-477612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:19:37.492475  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 21 more times)
E0414 12:19:58.632554  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 10 more times)
E0414 12:20:10.055735  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 3 more times)
E0414 12:20:14.564355  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 41 more times)
E0414 12:20:55.783632  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 25 more times)
E0414 12:21:21.695813  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 10 more times)
E0414 12:21:33.121052  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 9 more times)
E0414 12:21:43.439768  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 8 more times)
E0414 12:21:51.971559  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
... (previous warning line repeated 14 more times)
E0414 12:22:06.615570  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:22:18.847283  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:22:20.784813  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:22:34.586344  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/no-preload-500740/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:23:14.426809  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:23:15.034858  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:23:40.356863  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:23:46.406969  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/default-k8s-diff-port-477612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:23:51.499648  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (235.503172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-071646" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
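
Note: the poll that produced the warning above repeatedly lists pods carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace against the profile's apiserver (192.168.50.226:8443), and it failed here only because that apiserver refused connections for the whole 9m0s window. Below is a minimal client-go sketch of an equivalent manual check, not the harness's own helper; the kubeconfig path is the KUBECONFIG from this run and the context name assumes minikube's usual convention of naming the context after the profile (old-k8s-version-071646), so both would need adjusting elsewhere.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from this run's environment; context name is an
    	// assumption based on minikube naming contexts after the profile.
    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
    		&clientcmd.ClientConfigLoadingRules{ExplicitPath: "/home/jenkins/minikube-integration/20534-503273/kubeconfig"},
    		&clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-071646"},
    	).ClientConfig()
    	if err != nil {
    		log.Fatal(err)
    	}

    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Same query the test performs: pods labelled k8s-app=kubernetes-dashboard
    	// in the kubernetes-dashboard namespace.
    	pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
    		LabelSelector: "k8s-app=kubernetes-dashboard",
    	})
    	if err != nil {
    		// With the apiserver down, this is where "connection refused" surfaces.
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Println(p.Name, p.Status.Phase)
    	}
    }
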
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (217.227807ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
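
Note: the --format={{.Host}} and --format={{.APIServer}} arguments used by these status checks are Go text/template expressions rendered against minikube's status output, which is why the host can report Running while the apiserver field reports Stopped in the same post-mortem. The sketch below only illustrates how such a template resolves struct fields; the Status type here is a stand-in for illustration, not minikube's actual status struct.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Stand-in for the fields the status templates above reference; the real
    // minikube status type has more fields.
    type Status struct {
    	Host      string
    	APIServer string
    }

    func main() {
    	// The same kind of expression passed via --format in the log above.
    	tmpl := template.Must(template.New("status").Parse("host={{.Host}} apiserver={{.APIServer}}\n"))

    	// Values matching what this post-mortem observed.
    	s := Status{Host: "Running", APIServer: "Stopped"}
    	if err := tmpl.Execute(os.Stdout, s); err != nil {
    		panic(err)
    	}
    }
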
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-071646 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-751466 image list                          | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| delete  | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| start   | -p newest-cni-104469 --memory=2200 --alsologtostderr   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-500740 image list                           | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| delete  | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| addons  | enable metrics-server -p newest-cni-104469             | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-104469                  | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-104469 --memory=2200 --alsologtostderr   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-477612                           | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	| image   | newest-cni-104469 image list                           | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:12 UTC | 14 Apr 25 12:12 UTC |
	| delete  | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:12 UTC | 14 Apr 25 12:12 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:11:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:11:21.120181  569647 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:11:21.120306  569647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:11:21.120314  569647 out.go:358] Setting ErrFile to fd 2...
	I0414 12:11:21.120321  569647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:11:21.120558  569647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 12:11:21.121141  569647 out.go:352] Setting JSON to false
	I0414 12:11:21.122099  569647 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21232,"bootTime":1744611449,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:11:21.122165  569647 start.go:139] virtualization: kvm guest
	I0414 12:11:21.125125  569647 out.go:177] * [newest-cni-104469] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:11:21.126818  569647 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 12:11:21.126843  569647 notify.go:220] Checking for updates...
	I0414 12:11:21.129634  569647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:11:21.130894  569647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:21.132126  569647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:11:21.133333  569647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:11:21.134633  569647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:11:21.136670  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:21.137109  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.137207  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.153425  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0414 12:11:21.153887  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.154408  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.154435  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.154848  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.155038  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.155280  569647 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:11:21.155578  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.155618  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.171468  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0414 12:11:21.172092  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.172627  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.172657  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.173069  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.173264  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.212393  569647 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 12:11:21.213612  569647 start.go:297] selected driver: kvm2
	I0414 12:11:21.213629  569647 start.go:901] validating driver "kvm2" against &{Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:21.213754  569647 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:11:21.214497  569647 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:11:21.214593  569647 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:11:21.230852  569647 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:11:21.231270  569647 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0414 12:11:21.231336  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:21.231396  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:21.231436  569647 start.go:340] cluster config:
	{Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:21.231575  569647 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:11:21.233331  569647 out.go:177] * Starting "newest-cni-104469" primary control-plane node in "newest-cni-104469" cluster
	I0414 12:11:21.234770  569647 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:11:21.234813  569647 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:11:21.234825  569647 cache.go:56] Caching tarball of preloaded images
	I0414 12:11:21.234902  569647 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:11:21.234912  569647 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:11:21.235013  569647 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/config.json ...
	I0414 12:11:21.235220  569647 start.go:360] acquireMachinesLock for newest-cni-104469: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:11:21.235263  569647 start.go:364] duration metric: took 25.31µs to acquireMachinesLock for "newest-cni-104469"
	I0414 12:11:21.235277  569647 start.go:96] Skipping create...Using existing machine configuration
	I0414 12:11:21.235284  569647 fix.go:54] fixHost starting: 
	I0414 12:11:21.235603  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.235648  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.250885  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0414 12:11:21.251441  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.251920  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.251949  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.252312  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.252478  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.252628  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:21.254356  569647 fix.go:112] recreateIfNeeded on newest-cni-104469: state=Stopped err=<nil>
	I0414 12:11:21.254385  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	W0414 12:11:21.254563  569647 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 12:11:21.256588  569647 out.go:177] * Restarting existing kvm2 VM for "newest-cni-104469" ...
	I0414 12:11:20.198916  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:20.198958  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:20.238329  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:20.238362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:22.793258  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:22.807500  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:22.807583  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:22.844169  567375 cri.go:89] found id: ""
	I0414 12:11:22.844198  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.844210  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:22.844218  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:22.844283  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:22.883943  567375 cri.go:89] found id: ""
	I0414 12:11:22.883974  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.883986  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:22.883994  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:22.884063  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:22.918904  567375 cri.go:89] found id: ""
	I0414 12:11:22.918938  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.918950  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:22.918958  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:22.919015  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:22.959839  567375 cri.go:89] found id: ""
	I0414 12:11:22.959879  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.959892  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:22.959900  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:22.959966  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:23.002272  567375 cri.go:89] found id: ""
	I0414 12:11:23.002301  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.002313  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:23.002324  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:23.002392  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:23.037206  567375 cri.go:89] found id: ""
	I0414 12:11:23.037242  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.037254  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:23.037262  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:23.037339  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:23.073871  567375 cri.go:89] found id: ""
	I0414 12:11:23.073898  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.073907  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:23.073912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:23.073974  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:23.118533  567375 cri.go:89] found id: ""
	I0414 12:11:23.118571  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.118584  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:23.118597  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:23.118615  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:23.133894  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:23.133938  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:23.226964  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:23.226992  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:23.227010  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:23.352810  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:23.352855  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:23.402260  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:23.402297  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:21.257925  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Start
	I0414 12:11:21.258153  569647 main.go:141] libmachine: (newest-cni-104469) starting domain...
	I0414 12:11:21.258183  569647 main.go:141] libmachine: (newest-cni-104469) ensuring networks are active...
	I0414 12:11:21.259127  569647 main.go:141] libmachine: (newest-cni-104469) Ensuring network default is active
	I0414 12:11:21.259517  569647 main.go:141] libmachine: (newest-cni-104469) Ensuring network mk-newest-cni-104469 is active
	I0414 12:11:21.260074  569647 main.go:141] libmachine: (newest-cni-104469) getting domain XML...
	I0414 12:11:21.260776  569647 main.go:141] libmachine: (newest-cni-104469) creating domain...
	I0414 12:11:22.524766  569647 main.go:141] libmachine: (newest-cni-104469) waiting for IP...
	I0414 12:11:22.525521  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:22.526003  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:22.526073  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:22.526003  569682 retry.go:31] will retry after 307.883967ms: waiting for domain to come up
	I0414 12:11:22.835858  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:22.836463  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:22.836493  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:22.836420  569682 retry.go:31] will retry after 334.279409ms: waiting for domain to come up
	I0414 12:11:23.172155  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:23.172695  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:23.172727  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:23.172660  569682 retry.go:31] will retry after 299.810788ms: waiting for domain to come up
	I0414 12:11:23.474019  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:23.474427  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:23.474451  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:23.474416  569682 retry.go:31] will retry after 607.883043ms: waiting for domain to come up
	I0414 12:11:24.084316  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:24.084843  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:24.084887  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:24.084803  569682 retry.go:31] will retry after 665.362972ms: waiting for domain to come up
	I0414 12:11:24.751457  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:24.752025  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:24.752048  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:24.752008  569682 retry.go:31] will retry after 745.34954ms: waiting for domain to come up
	I0414 12:11:25.499392  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:25.544776  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:25.544821  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:25.544720  569682 retry.go:31] will retry after 908.451126ms: waiting for domain to come up
	I0414 12:11:25.957521  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:25.970937  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:25.971011  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:26.004566  567375 cri.go:89] found id: ""
	I0414 12:11:26.004601  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.004612  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:26.004620  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:26.004683  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:26.044984  567375 cri.go:89] found id: ""
	I0414 12:11:26.045016  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.045029  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:26.045037  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:26.045102  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:26.077283  567375 cri.go:89] found id: ""
	I0414 12:11:26.077316  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.077328  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:26.077336  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:26.077403  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:26.115453  567375 cri.go:89] found id: ""
	I0414 12:11:26.115478  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.115486  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:26.115493  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:26.115547  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:26.154963  567375 cri.go:89] found id: ""
	I0414 12:11:26.155002  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.155013  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:26.155021  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:26.155115  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:26.192115  567375 cri.go:89] found id: ""
	I0414 12:11:26.192148  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.192160  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:26.192169  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:26.192230  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:26.233202  567375 cri.go:89] found id: ""
	I0414 12:11:26.233236  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.233248  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:26.233256  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:26.233320  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:26.267547  567375 cri.go:89] found id: ""
	I0414 12:11:26.267579  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.267591  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:26.267602  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:26.267618  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:26.331976  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:26.332017  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:26.345893  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:26.345942  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:26.424476  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:26.424502  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:26.424518  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:26.513728  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:26.513763  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:29.057175  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:29.073805  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:29.073912  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:29.105549  567375 cri.go:89] found id: ""
	I0414 12:11:29.105578  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.105586  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:29.105594  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:29.105663  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:29.137613  567375 cri.go:89] found id: ""
	I0414 12:11:29.137643  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.137652  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:29.137658  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:29.137712  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:29.169687  567375 cri.go:89] found id: ""
	I0414 12:11:29.169726  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.169739  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:29.169752  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:29.169837  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:29.202019  567375 cri.go:89] found id: ""
	I0414 12:11:29.202054  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.202068  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:29.202077  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:29.202153  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:29.233953  567375 cri.go:89] found id: ""
	I0414 12:11:29.233991  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.234004  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:29.234014  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:29.234083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:29.267465  567375 cri.go:89] found id: ""
	I0414 12:11:29.267498  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.267511  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:29.267518  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:29.267585  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:29.301872  567375 cri.go:89] found id: ""
	I0414 12:11:29.301897  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.301905  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:29.301912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:29.301965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:29.336739  567375 cri.go:89] found id: ""
	I0414 12:11:29.336778  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.336792  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:29.336804  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:29.336821  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:29.386826  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:29.386867  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:29.402381  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:29.402411  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:29.471119  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:29.471146  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:29.471162  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:29.549103  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:29.549147  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:26.454591  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:26.455304  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:26.455337  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:26.455253  569682 retry.go:31] will retry after 971.962699ms: waiting for domain to come up
	I0414 12:11:27.428593  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:27.429086  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:27.429145  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:27.429061  569682 retry.go:31] will retry after 1.858464483s: waiting for domain to come up
	I0414 12:11:29.290212  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:29.290765  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:29.290794  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:29.290721  569682 retry.go:31] will retry after 1.729999321s: waiting for domain to come up
	I0414 12:11:31.022585  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:31.023131  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:31.023154  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:31.023104  569682 retry.go:31] will retry after 1.833182014s: waiting for domain to come up
	I0414 12:11:32.093046  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:32.111567  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:32.111656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:32.147814  567375 cri.go:89] found id: ""
	I0414 12:11:32.147845  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.147856  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:32.147865  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:32.147932  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:32.184293  567375 cri.go:89] found id: ""
	I0414 12:11:32.184327  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.184337  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:32.184345  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:32.184415  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:32.220242  567375 cri.go:89] found id: ""
	I0414 12:11:32.220283  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.220294  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:32.220302  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:32.220368  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:32.259235  567375 cri.go:89] found id: ""
	I0414 12:11:32.259274  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.259302  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:32.259320  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:32.259395  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:32.296349  567375 cri.go:89] found id: ""
	I0414 12:11:32.296383  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.296396  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:32.296404  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:32.296477  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:32.337046  567375 cri.go:89] found id: ""
	I0414 12:11:32.337078  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.337097  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:32.337106  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:32.337181  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:32.370809  567375 cri.go:89] found id: ""
	I0414 12:11:32.370841  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.370855  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:32.370864  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:32.370923  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:32.409908  567375 cri.go:89] found id: ""
	I0414 12:11:32.409936  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.409945  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:32.409955  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:32.409967  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:32.463974  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:32.464019  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:32.478989  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:32.479020  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:32.547623  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:32.547647  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:32.547659  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:32.635676  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:32.635716  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:32.858397  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:32.858993  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:32.859046  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:32.858997  569682 retry.go:31] will retry after 2.287767065s: waiting for domain to come up
	I0414 12:11:35.148507  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:35.149113  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:35.149168  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:35.149076  569682 retry.go:31] will retry after 3.709674414s: waiting for domain to come up
	I0414 12:11:35.172933  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:35.185360  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:35.185430  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:35.215587  567375 cri.go:89] found id: ""
	I0414 12:11:35.215619  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.215630  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:35.215639  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:35.215703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:35.246725  567375 cri.go:89] found id: ""
	I0414 12:11:35.246756  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.246769  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:35.246777  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:35.246842  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:35.277582  567375 cri.go:89] found id: ""
	I0414 12:11:35.277615  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.277627  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:35.277634  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:35.277703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:35.308852  567375 cri.go:89] found id: ""
	I0414 12:11:35.308884  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.308896  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:35.308904  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:35.308976  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:35.344753  567375 cri.go:89] found id: ""
	I0414 12:11:35.344785  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.344805  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:35.344813  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:35.344889  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:35.375334  567375 cri.go:89] found id: ""
	I0414 12:11:35.375369  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.375382  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:35.375392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:35.375461  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:35.407962  567375 cri.go:89] found id: ""
	I0414 12:11:35.407995  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.408003  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:35.408009  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:35.408072  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:35.438923  567375 cri.go:89] found id: ""
	I0414 12:11:35.438951  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.438959  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:35.438969  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:35.438982  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:35.451619  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:35.451655  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:35.515840  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:35.515872  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:35.515890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:35.591791  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:35.591838  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:35.629963  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:35.629994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.177510  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:38.189629  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:38.189703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:38.221893  567375 cri.go:89] found id: ""
	I0414 12:11:38.221930  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.221943  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:38.221952  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:38.222022  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:38.253207  567375 cri.go:89] found id: ""
	I0414 12:11:38.253238  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.253246  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:38.253254  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:38.253314  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:38.284207  567375 cri.go:89] found id: ""
	I0414 12:11:38.284237  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.284250  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:38.284259  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:38.284317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:38.316011  567375 cri.go:89] found id: ""
	I0414 12:11:38.316042  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.316055  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:38.316062  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:38.316129  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:38.346662  567375 cri.go:89] found id: ""
	I0414 12:11:38.346694  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.346706  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:38.346715  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:38.346775  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:38.378428  567375 cri.go:89] found id: ""
	I0414 12:11:38.378460  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.378468  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:38.378474  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:38.378527  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:38.409730  567375 cri.go:89] found id: ""
	I0414 12:11:38.409781  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.409793  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:38.409803  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:38.409880  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:38.441413  567375 cri.go:89] found id: ""
	I0414 12:11:38.441439  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.441448  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:38.441458  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:38.441471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.488672  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:38.488723  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:38.501037  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:38.501066  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:38.563620  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:38.563643  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:38.563660  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:38.637874  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:38.637912  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:38.861814  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.862326  569647 main.go:141] libmachine: (newest-cni-104469) found domain IP: 192.168.61.116
	I0414 12:11:38.862344  569647 main.go:141] libmachine: (newest-cni-104469) reserving static IP address...
	I0414 12:11:38.862354  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has current primary IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.862810  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "newest-cni-104469", mac: "52:54:00:db:0b:38", ip: "192.168.61.116"} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.862843  569647 main.go:141] libmachine: (newest-cni-104469) DBG | skip adding static IP to network mk-newest-cni-104469 - found existing host DHCP lease matching {name: "newest-cni-104469", mac: "52:54:00:db:0b:38", ip: "192.168.61.116"}
	I0414 12:11:38.862859  569647 main.go:141] libmachine: (newest-cni-104469) reserved static IP address 192.168.61.116 for domain newest-cni-104469
	I0414 12:11:38.862870  569647 main.go:141] libmachine: (newest-cni-104469) waiting for SSH...
	I0414 12:11:38.862881  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Getting to WaitForSSH function...
	I0414 12:11:38.865098  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.865437  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.865470  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.865529  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Using SSH client type: external
	I0414 12:11:38.865560  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa (-rw-------)
	I0414 12:11:38.865587  569647 main.go:141] libmachine: (newest-cni-104469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 12:11:38.865634  569647 main.go:141] libmachine: (newest-cni-104469) DBG | About to run SSH command:
	I0414 12:11:38.865654  569647 main.go:141] libmachine: (newest-cni-104469) DBG | exit 0
	I0414 12:11:38.991362  569647 main.go:141] libmachine: (newest-cni-104469) DBG | SSH cmd err, output: <nil>: 
	I0414 12:11:38.991738  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetConfigRaw
	I0414 12:11:38.992363  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:38.995348  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.995739  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.995763  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.996122  569647 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/config.json ...
	I0414 12:11:38.996361  569647 machine.go:93] provisionDockerMachine start ...
	I0414 12:11:38.996390  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:38.996627  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:38.998988  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.999418  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.999442  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.999619  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:38.999790  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:38.999942  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.000165  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.000352  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.000637  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.000650  569647 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 12:11:39.111566  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 12:11:39.111601  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.111888  569647 buildroot.go:166] provisioning hostname "newest-cni-104469"
	I0414 12:11:39.111921  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.112099  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.114831  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.115201  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.115231  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.115348  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.115518  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.115681  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.115834  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.116016  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.116227  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.116243  569647 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-104469 && echo "newest-cni-104469" | sudo tee /etc/hostname
	I0414 12:11:39.235982  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-104469
	
	I0414 12:11:39.236023  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.238767  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.239126  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.239154  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.239375  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.239553  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.239730  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.239848  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.239994  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.240236  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.240253  569647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-104469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-104469/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-104469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 12:11:39.359797  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:11:39.359831  569647 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 12:11:39.359856  569647 buildroot.go:174] setting up certificates
	I0414 12:11:39.359871  569647 provision.go:84] configureAuth start
	I0414 12:11:39.359887  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.360227  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:39.363241  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.363632  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.363661  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.363808  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.366517  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.366878  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.366923  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.367093  569647 provision.go:143] copyHostCerts
	I0414 12:11:39.367154  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 12:11:39.367178  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 12:11:39.367259  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 12:11:39.367409  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 12:11:39.367422  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 12:11:39.367461  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 12:11:39.367564  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 12:11:39.367576  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 12:11:39.367609  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 12:11:39.367696  569647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.newest-cni-104469 san=[127.0.0.1 192.168.61.116 localhost minikube newest-cni-104469]
	I0414 12:11:39.512453  569647 provision.go:177] copyRemoteCerts
	I0414 12:11:39.512534  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 12:11:39.512575  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.515537  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.515909  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.515945  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.516071  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.516276  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.516443  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.516573  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:39.601480  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 12:11:39.625398  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 12:11:39.650802  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 12:11:39.675068  569647 provision.go:87] duration metric: took 315.17938ms to configureAuth
	I0414 12:11:39.675101  569647 buildroot.go:189] setting minikube options for container-runtime
	I0414 12:11:39.675349  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:39.675432  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.678249  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.678617  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.678663  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.678831  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.679031  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.679193  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.679332  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.679487  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.679696  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.679712  569647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 12:11:39.899965  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 12:11:39.899998  569647 machine.go:96] duration metric: took 903.619003ms to provisionDockerMachine
	I0414 12:11:39.900014  569647 start.go:293] postStartSetup for "newest-cni-104469" (driver="kvm2")
	I0414 12:11:39.900028  569647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 12:11:39.900053  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:39.900415  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 12:11:39.900451  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.903052  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.903452  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.903483  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.903679  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.903870  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.904069  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.904241  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:39.989513  569647 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 12:11:39.993490  569647 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 12:11:39.993517  569647 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 12:11:39.993594  569647 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 12:11:39.993691  569647 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 12:11:39.993814  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 12:11:40.002553  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:11:40.024737  569647 start.go:296] duration metric: took 124.706191ms for postStartSetup
	I0414 12:11:40.024779  569647 fix.go:56] duration metric: took 18.789494511s for fixHost
	I0414 12:11:40.024800  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.027427  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.027719  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.027751  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.027915  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.028129  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.028292  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.028414  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.028579  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:40.028888  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:40.028904  569647 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 12:11:40.135768  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744632700.105724681
	
	I0414 12:11:40.135795  569647 fix.go:216] guest clock: 1744632700.105724681
	I0414 12:11:40.135802  569647 fix.go:229] Guest: 2025-04-14 12:11:40.105724681 +0000 UTC Remote: 2025-04-14 12:11:40.024782852 +0000 UTC m=+18.941186859 (delta=80.941829ms)
	I0414 12:11:40.135840  569647 fix.go:200] guest clock delta is within tolerance: 80.941829ms
	I0414 12:11:40.135845  569647 start.go:83] releasing machines lock for "newest-cni-104469", held for 18.900572975s
	I0414 12:11:40.135867  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.136110  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:40.139092  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.139498  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.139528  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.139729  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140213  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140375  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140494  569647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 12:11:40.140550  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.140597  569647 ssh_runner.go:195] Run: cat /version.json
	I0414 12:11:40.140620  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.143168  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143464  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143523  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.143545  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143717  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.143928  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.143941  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.143958  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.144105  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.144137  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.144273  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:40.144422  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.144572  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.144723  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:40.253928  569647 ssh_runner.go:195] Run: systemctl --version
	I0414 12:11:40.259508  569647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 12:11:40.399347  569647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 12:11:40.404975  569647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 12:11:40.405068  569647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 12:11:40.420258  569647 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 12:11:40.420289  569647 start.go:495] detecting cgroup driver to use...
	I0414 12:11:40.420369  569647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 12:11:40.436755  569647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 12:11:40.450152  569647 docker.go:217] disabling cri-docker service (if available) ...
	I0414 12:11:40.450245  569647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 12:11:40.464139  569647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 12:11:40.477505  569647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 12:11:40.591544  569647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 12:11:40.762510  569647 docker.go:233] disabling docker service ...
	I0414 12:11:40.762590  569647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 12:11:40.777138  569647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 12:11:40.790390  569647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 12:11:40.907968  569647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 12:11:41.012941  569647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 12:11:41.026846  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 12:11:41.044129  569647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 12:11:41.044224  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.054103  569647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 12:11:41.054180  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.063996  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.073838  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.083706  569647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 12:11:41.093759  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.103550  569647 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.118834  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.128734  569647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 12:11:41.137754  569647 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 12:11:41.137910  569647 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 12:11:41.150890  569647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 12:11:41.160130  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:41.274669  569647 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 12:11:41.366746  569647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 12:11:41.366838  569647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 12:11:41.371400  569647 start.go:563] Will wait 60s for crictl version
	I0414 12:11:41.371472  569647 ssh_runner.go:195] Run: which crictl
	I0414 12:11:41.375071  569647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 12:11:41.414018  569647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 12:11:41.414145  569647 ssh_runner.go:195] Run: crio --version
	I0414 12:11:41.441601  569647 ssh_runner.go:195] Run: crio --version
	I0414 12:11:41.470278  569647 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 12:11:41.471736  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:41.474769  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:41.475176  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:41.475208  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:41.475465  569647 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 12:11:41.480427  569647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:11:41.494695  569647 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0414 12:11:41.174407  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:41.188283  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:41.188349  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:41.218963  567375 cri.go:89] found id: ""
	I0414 12:11:41.218995  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.219007  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:41.219015  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:41.219080  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:41.254974  567375 cri.go:89] found id: ""
	I0414 12:11:41.255007  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.255016  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:41.255022  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:41.255083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:41.291440  567375 cri.go:89] found id: ""
	I0414 12:11:41.291478  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.291490  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:41.291498  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:41.291566  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:41.326668  567375 cri.go:89] found id: ""
	I0414 12:11:41.326699  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.326710  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:41.326718  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:41.326788  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:41.358533  567375 cri.go:89] found id: ""
	I0414 12:11:41.358564  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.358577  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:41.358585  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:41.358656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:41.390847  567375 cri.go:89] found id: ""
	I0414 12:11:41.390892  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.390904  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:41.390916  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:41.390986  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:41.422995  567375 cri.go:89] found id: ""
	I0414 12:11:41.423029  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.423040  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:41.423047  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:41.423108  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:41.455329  567375 cri.go:89] found id: ""
	I0414 12:11:41.455359  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.455371  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:41.455384  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:41.455398  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:41.506257  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:41.506288  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:41.518836  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:41.518866  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:41.588714  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:41.588744  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:41.588764  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:41.672001  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:41.672039  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:44.216461  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:44.229313  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:44.229404  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:44.263625  567375 cri.go:89] found id: ""
	I0414 12:11:44.263662  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.263674  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:44.263682  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:44.263746  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:44.295775  567375 cri.go:89] found id: ""
	I0414 12:11:44.295815  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.295829  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:44.295836  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:44.295905  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:44.340233  567375 cri.go:89] found id: ""
	I0414 12:11:44.340270  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.340281  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:44.340289  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:44.340358  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:44.379008  567375 cri.go:89] found id: ""
	I0414 12:11:44.379046  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.379060  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:44.379070  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:44.379148  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:44.412114  567375 cri.go:89] found id: ""
	I0414 12:11:44.412151  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.412160  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:44.412166  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:44.412217  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:44.446940  567375 cri.go:89] found id: ""
	I0414 12:11:44.446967  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.446975  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:44.446982  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:44.447037  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:44.494452  567375 cri.go:89] found id: ""
	I0414 12:11:44.494491  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.494503  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:44.494511  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:44.494578  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:44.531111  567375 cri.go:89] found id: ""
	I0414 12:11:44.531158  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.531171  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:44.531185  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:44.531201  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:44.590909  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:44.590954  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:44.607376  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:44.607428  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:44.678145  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:44.678171  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:44.678190  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:44.758306  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:44.758351  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:41.495951  569647 kubeadm.go:883] updating cluster {Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-1
04469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 12:11:41.496082  569647 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:11:41.496147  569647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:11:41.537224  569647 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 12:11:41.537318  569647 ssh_runner.go:195] Run: which lz4
	I0414 12:11:41.541348  569647 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 12:11:41.545374  569647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 12:11:41.545417  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 12:11:42.748200  569647 crio.go:462] duration metric: took 1.206904316s to copy over tarball
	I0414 12:11:42.748273  569647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 12:11:44.940244  569647 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191944178s)
	I0414 12:11:44.940275  569647 crio.go:469] duration metric: took 2.192045159s to extract the tarball
	I0414 12:11:44.940282  569647 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 12:11:44.976846  569647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:11:45.017205  569647 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 12:11:45.017232  569647 cache_images.go:84] Images are preloaded, skipping loading
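The preload sequence above stats /preloaded.tar.lz4 on the node, copies the cached tarball over when it is missing, and unpacks it into /var with xattrs preserved so the second `crictl images` check finds everything preloaded. A hedged sketch of the same steps, run locally rather than through minikube's SSH runner (paths mirror the log; the plain `cp` stands in for the scp):

// preload_restore.go - sketch of the preload restore step, assuming local execution.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const target = "/preloaded.tar.lz4"
	cache := "/home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4"

	if _, err := os.Stat(target); os.IsNotExist(err) {
		// In minikube this copy happens over SSH; a local cp stands in here.
		if out, err := exec.Command("sudo", "cp", cache, target).CombinedOutput(); err != nil {
			fmt.Printf("copy failed: %v\n%s", err, out)
			return
		}
	}
	// Extract with xattrs preserved so image layers keep security.capability.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", target)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images restored; `sudo crictl images` should now list them")
}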
	I0414 12:11:45.017240  569647 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.32.2 crio true true} ...
	I0414 12:11:45.017357  569647 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-104469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 12:11:45.017443  569647 ssh_runner.go:195] Run: crio config
	I0414 12:11:45.066045  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:45.066074  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:45.066086  569647 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0414 12:11:45.066108  569647 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-104469 NodeName:newest-cni-104469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 12:11:45.066250  569647 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-104469"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.116"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
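The generated file is a multi-document kubeadm.yaml: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, all carrying the 10.42.0.0/16 pod CIDR passed through the kubeadm extra option. A small sketch that cross-checks the rendered file for that consistency (uses gopkg.in/yaml.v3; the file path matches the log, the check itself is an assumption, not part of minikube):

// check_kubeadm_yaml.go - verify podSubnet and clusterCIDR agree in the rendered config.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var podSubnet, clusterCIDR string
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		switch doc["kind"] {
		case "ClusterConfiguration":
			if nw, ok := doc["networking"].(map[string]interface{}); ok {
				podSubnet, _ = nw["podSubnet"].(string)
			}
		case "KubeProxyConfiguration":
			clusterCIDR, _ = doc["clusterCIDR"].(string)
		}
	}
	fmt.Printf("podSubnet=%q clusterCIDR=%q match=%v\n", podSubnet, clusterCIDR, podSubnet == clusterCIDR)
}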
	
	I0414 12:11:45.066317  569647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 12:11:45.075884  569647 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 12:11:45.075969  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 12:11:45.084969  569647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0414 12:11:45.100691  569647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 12:11:45.116384  569647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0414 12:11:45.131922  569647 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0414 12:11:45.135512  569647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
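The bash one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts: it drops any existing line for that hostname and appends a fresh entry. A hedged Go sketch of the equivalent logic (the helper is illustrative; hostname and IP come from the log):

// pin_hosts.go - sketch of the /etc/hosts rewrite shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop stale entries for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.61.116", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}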
	I0414 12:11:45.146978  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:45.261463  569647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:11:45.279128  569647 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469 for IP: 192.168.61.116
	I0414 12:11:45.279159  569647 certs.go:194] generating shared ca certs ...
	I0414 12:11:45.279178  569647 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:45.279434  569647 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 12:11:45.279505  569647 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 12:11:45.279520  569647 certs.go:256] generating profile certs ...
	I0414 12:11:45.279642  569647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/client.key
	I0414 12:11:45.279729  569647 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.key.774aa14a
	I0414 12:11:45.279810  569647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.key
	I0414 12:11:45.279954  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 12:11:45.279996  569647 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 12:11:45.280007  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 12:11:45.280039  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 12:11:45.280076  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 12:11:45.280105  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 12:11:45.280168  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:11:45.280847  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 12:11:45.314145  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 12:11:45.338428  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 12:11:45.370752  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 12:11:45.397988  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 12:11:45.425777  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 12:11:45.448378  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 12:11:45.472600  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 12:11:45.495315  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 12:11:45.517788  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 12:11:45.541189  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 12:11:45.566831  569647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 12:11:45.584065  569647 ssh_runner.go:195] Run: openssl version
	I0414 12:11:45.589870  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 12:11:45.600360  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.604736  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.604808  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.610342  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 12:11:45.620182  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 12:11:45.630441  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.634658  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.634747  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.640599  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 12:11:45.651269  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 12:11:45.662116  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.666678  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.666779  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.672334  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 12:11:45.682554  569647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 12:11:45.686828  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 12:11:45.693016  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 12:11:45.698975  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 12:11:45.704832  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 12:11:45.710682  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 12:11:45.716357  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
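Each `openssl x509 -noout -checkend 86400` run above asks whether a certificate will expire within the next 24 hours before the cluster is restarted with it. The same check in Go's crypto/x509, as a minimal sketch (file list mirrors the certs probed in the log):

// checkend.go - sketch of the 24h certificate-expiry check.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when now+d falls past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, path := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(path, 24*time.Hour)
		if err != nil {
			fmt.Printf("%s: %v\n", path, err)
			continue
		}
		fmt.Printf("%s: expires within 24h: %v\n", path, soon)
	}
}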
	I0414 12:11:45.722031  569647 kubeadm.go:392] StartCluster: {Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-1044
69 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:45.722164  569647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 12:11:45.722256  569647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:11:45.762047  569647 cri.go:89] found id: ""
	I0414 12:11:45.762149  569647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 12:11:45.772159  569647 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 12:11:45.772188  569647 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 12:11:45.772238  569647 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 12:11:45.781693  569647 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 12:11:45.782599  569647 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-104469" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:45.782917  569647 kubeconfig.go:62] /home/jenkins/minikube-integration/20534-503273/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-104469" cluster setting kubeconfig missing "newest-cni-104469" context setting]
	I0414 12:11:45.783560  569647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
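The repair noted above adds the missing "newest-cni-104469" cluster and context entries to the Jenkins kubeconfig before writing it back under the file lock. A hedged sketch of that update using client-go's clientcmd package (names and endpoint come from the log; the CA path is an assumption, and this is not minikube's kubeconfig code):

// repair_kubeconfig.go - sketch of adding a missing cluster/context to a kubeconfig.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/20534-503273/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "newest-cni-104469"
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = "https://192.168.61.116:8443"
		cluster.CertificateAuthority = "/home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt" // assumed CA path
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}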
	I0414 12:11:45.785561  569647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 12:11:45.795019  569647 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0414 12:11:45.795061  569647 kubeadm.go:1160] stopping kube-system containers ...
	I0414 12:11:45.795073  569647 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 12:11:45.795121  569647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:11:45.832759  569647 cri.go:89] found id: ""
	I0414 12:11:45.832853  569647 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 12:11:45.849887  569647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:11:45.860004  569647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:11:45.860044  569647 kubeadm.go:157] found existing configuration files:
	
	I0414 12:11:45.860105  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:11:45.869219  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:11:45.869287  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:11:45.878859  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:11:45.890576  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:11:45.890661  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:11:45.909668  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:11:45.918990  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:11:45.919080  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:11:45.928230  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:11:45.936683  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:11:45.936746  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:11:45.945411  569647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:11:45.954641  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.058335  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:47.316487  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:47.331760  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:47.331855  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:47.366754  567375 cri.go:89] found id: ""
	I0414 12:11:47.366790  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.366800  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:47.366807  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:47.366876  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:47.401386  567375 cri.go:89] found id: ""
	I0414 12:11:47.401418  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.401430  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:47.401438  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:47.401500  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:47.436630  567375 cri.go:89] found id: ""
	I0414 12:11:47.436672  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.436686  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:47.436695  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:47.436770  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:47.476106  567375 cri.go:89] found id: ""
	I0414 12:11:47.476140  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.476149  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:47.476156  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:47.476224  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:47.511092  567375 cri.go:89] found id: ""
	I0414 12:11:47.511117  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.511126  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:47.511134  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:47.511196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:47.543336  567375 cri.go:89] found id: ""
	I0414 12:11:47.543365  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.543375  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:47.543392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:47.543455  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:47.591258  567375 cri.go:89] found id: ""
	I0414 12:11:47.591282  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.591307  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:47.591315  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:47.591378  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:47.631828  567375 cri.go:89] found id: ""
	I0414 12:11:47.631858  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.631867  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:47.631888  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:47.631901  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:47.681449  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:47.681491  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:47.695772  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:47.695808  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:47.767246  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:47.767279  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:47.767312  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:47.849554  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:47.849608  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:46.644225  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.835780  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.909528  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:47.008035  569647 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:11:47.008154  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:47.508435  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:48.008446  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:48.509090  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:49.008987  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:49.097677  569647 api_server.go:72] duration metric: took 2.08963857s to wait for apiserver process to appear ...
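The timestamps above show a 500ms polling loop: `pgrep -xnf kube-apiserver.*minikube.*` is retried until the apiserver process exists, then the health wait starts. A minimal Go sketch of that wait (interval and pattern come from the log; the deadline is an assumption):

// wait_apiserver.go - sketch of the apiserver process wait.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // pgrep exits 0 once a match exists
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver PID:", pid)
}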
	I0414 12:11:49.097719  569647 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:11:49.097747  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:49.098477  569647 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0414 12:11:49.597917  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:51.914295  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 12:11:51.914332  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 12:11:51.914351  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:51.950360  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 12:11:51.950390  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 12:11:52.098794  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:52.144939  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 12:11:52.144974  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 12:11:52.598644  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:52.602917  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 12:11:52.602941  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 12:11:53.098719  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:53.103810  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0414 12:11:53.110249  569647 api_server.go:141] control plane version: v1.32.2
	I0414 12:11:53.110286  569647 api_server.go:131] duration metric: took 4.012559017s to wait for apiserver health ...
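The healthz wait above tolerates the expected intermediate answers (403 from the anonymous user before RBAC bootstrap, 500 while post-start hooks such as bootstrap-controller finish) and stops once /healthz returns 200 "ok". A hedged sketch of that polling loop (endpoint from the log; TLS verification is skipped purely for brevity, whereas minikube uses the cluster CA):

// healthz_poll.go - sketch of polling the apiserver /healthz endpoint until healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.116:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}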
	I0414 12:11:53.110296  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:53.110304  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:53.112437  569647 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 12:11:53.113774  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 12:11:53.123553  569647 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
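The 496-byte file scp'd above is the bridge CNI configuration for the 10.42.0.0/16 pod CIDR. The JSON below is a representative bridge-plus-portmap conflist for that CIDR, not necessarily the exact payload minikube writes; the sketch just makes the step concrete:

// write_conflist.go - write a representative bridge CNI conflist (assumed contents).
package main

import "os"

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Requires root on the node; shown only to illustrate where the config lands.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}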
	I0414 12:11:53.140406  569647 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:11:53.150738  569647 system_pods.go:59] 8 kube-system pods found
	I0414 12:11:53.150784  569647 system_pods.go:61] "coredns-668d6bf9bc-w4bzb" [e6206551-e8cd-4eec-9fe0-d1e6a8ce92c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 12:11:53.150794  569647 system_pods.go:61] "etcd-newest-cni-104469" [2ee08cb2-71cf-4277-a620-2e489f3f2446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 12:11:53.150802  569647 system_pods.go:61] "kube-apiserver-newest-cni-104469" [14f7a41a-018f-4f66-bda8-f372f0bc5064] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 12:11:53.150816  569647 system_pods.go:61] "kube-controller-manager-newest-cni-104469" [178f361d-e24b-4bfb-a916-3507cd011e3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 12:11:53.150824  569647 system_pods.go:61] "kube-proxy-tt6kz" [3ef9ada6-36d4-4ba9-92e5-e3542317f468] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 12:11:53.150833  569647 system_pods.go:61] "kube-scheduler-newest-cni-104469" [43010056-fcfe-4ef5-a834-9651e3123276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 12:11:53.150837  569647 system_pods.go:61] "metrics-server-f79f97bbb-vrl2k" [6cec0337-8996-4c11-86b6-be3f25e2eeda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:11:53.150843  569647 system_pods.go:61] "storage-provisioner" [3998c14d-608d-43b5-a6b9-972918ac6675] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 12:11:53.150850  569647 system_pods.go:74] duration metric: took 10.421128ms to wait for pod list to return data ...
	I0414 12:11:53.150859  569647 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:11:53.153603  569647 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:11:53.153640  569647 node_conditions.go:123] node cpu capacity is 2
	I0414 12:11:53.153659  569647 node_conditions.go:105] duration metric: took 2.796154ms to run NodePressure ...
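The "waiting for kube-system pods to appear" step lists the kube-system namespace and records each pod with its readiness, as in the eight entries enumerated above. A sketch of the same listing with client-go (the kubeconfig path is an assumption):

// list_system_pods.go - list kube-system pods and their phases via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
	}
}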
	I0414 12:11:53.153685  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:53.451414  569647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:11:53.463015  569647 ops.go:34] apiserver oom_adj: -16
	I0414 12:11:53.463044  569647 kubeadm.go:597] duration metric: took 7.690847961s to restartPrimaryControlPlane
	I0414 12:11:53.463065  569647 kubeadm.go:394] duration metric: took 7.741049865s to StartCluster
	I0414 12:11:53.463091  569647 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:53.463196  569647 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:53.464309  569647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:53.464608  569647 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:11:53.464788  569647 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 12:11:53.464894  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:53.464933  569647 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-104469"
	I0414 12:11:53.464954  569647 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-104469"
	I0414 12:11:53.464960  569647 addons.go:69] Setting default-storageclass=true in profile "newest-cni-104469"
	W0414 12:11:53.464967  569647 addons.go:247] addon storage-provisioner should already be in state true
	I0414 12:11:53.464983  569647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-104469"
	I0414 12:11:53.464984  569647 addons.go:69] Setting metrics-server=true in profile "newest-cni-104469"
	I0414 12:11:53.465025  569647 addons.go:238] Setting addon metrics-server=true in "newest-cni-104469"
	W0414 12:11:53.465050  569647 addons.go:247] addon metrics-server should already be in state true
	I0414 12:11:53.465049  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.464964  569647 addons.go:69] Setting dashboard=true in profile "newest-cni-104469"
	I0414 12:11:53.465086  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.465109  569647 addons.go:238] Setting addon dashboard=true in "newest-cni-104469"
	W0414 12:11:53.465120  569647 addons.go:247] addon dashboard should already be in state true
	I0414 12:11:53.465150  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.465445  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465486  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465499  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465445  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465525  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465568  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465599  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465623  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.467469  569647 out.go:177] * Verifying Kubernetes components...
	I0414 12:11:53.468679  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:53.485803  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42723
	I0414 12:11:53.486041  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0414 12:11:53.486193  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0414 12:11:53.486206  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0414 12:11:53.486456  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486715  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486835  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486935  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.487152  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487175  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487310  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487338  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487538  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.487539  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487605  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487814  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487837  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487880  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.487991  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488089  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488232  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488630  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488634  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488690  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.488754  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.488797  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488837  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.507852  569647 addons.go:238] Setting addon default-storageclass=true in "newest-cni-104469"
	W0414 12:11:53.507881  569647 addons.go:247] addon default-storageclass should already be in state true
	I0414 12:11:53.507917  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.508343  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.508402  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.511624  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0414 12:11:53.512195  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.512735  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.512757  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.513140  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.513333  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.515362  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.517751  569647 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 12:11:53.518934  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 12:11:53.518959  569647 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 12:11:53.518984  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.524940  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.525463  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.525484  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.525902  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.526120  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.526292  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.526446  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.528938  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0414 12:11:53.529570  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.530010  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.530027  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.530494  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.530752  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.531020  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37235
	I0414 12:11:53.531557  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.531663  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0414 12:11:53.532241  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.532451  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.532470  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.533132  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.533152  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.533484  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.533633  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.533647  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.534060  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.534624  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.534675  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.535130  569647 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:11:53.535462  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.536805  569647 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:11:53.536831  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 12:11:53.536852  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.537516  569647 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 12:11:53.538836  569647 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 12:11:50.386577  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:50.399173  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:50.399257  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:50.429909  567375 cri.go:89] found id: ""
	I0414 12:11:50.429938  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.429948  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:50.429956  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:50.430016  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:50.460948  567375 cri.go:89] found id: ""
	I0414 12:11:50.460981  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.460990  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:50.460996  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:50.461056  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:50.492141  567375 cri.go:89] found id: ""
	I0414 12:11:50.492172  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.492179  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:50.492186  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:50.492249  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:50.524274  567375 cri.go:89] found id: ""
	I0414 12:11:50.524301  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.524309  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:50.524317  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:50.524391  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:50.556554  567375 cri.go:89] found id: ""
	I0414 12:11:50.556583  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.556594  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:50.556601  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:50.556671  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:50.598848  567375 cri.go:89] found id: ""
	I0414 12:11:50.598878  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.598889  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:50.598898  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:50.598965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:50.629450  567375 cri.go:89] found id: ""
	I0414 12:11:50.629482  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.629491  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:50.629497  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:50.629550  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:50.660726  567375 cri.go:89] found id: ""
	I0414 12:11:50.660764  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.660778  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:50.660790  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:50.660809  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:50.711830  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:50.711868  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:50.724837  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:50.724869  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:50.787307  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:50.787340  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:50.787356  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:50.861702  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:50.861749  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:53.398783  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:53.412227  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:53.412304  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:53.451115  567375 cri.go:89] found id: ""
	I0414 12:11:53.451149  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.451161  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:53.451170  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:53.451236  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:53.489749  567375 cri.go:89] found id: ""
	I0414 12:11:53.489783  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.489793  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:53.489801  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:53.489847  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:53.542102  567375 cri.go:89] found id: ""
	I0414 12:11:53.542122  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.542132  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:53.542140  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:53.542196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:53.582780  567375 cri.go:89] found id: ""
	I0414 12:11:53.582814  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.582827  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:53.582837  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:53.582900  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:53.616309  567375 cri.go:89] found id: ""
	I0414 12:11:53.616339  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.616355  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:53.616368  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:53.616429  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:53.650528  567375 cri.go:89] found id: ""
	I0414 12:11:53.650564  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.650578  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:53.650586  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:53.650658  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:53.687484  567375 cri.go:89] found id: ""
	I0414 12:11:53.687514  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.687525  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:53.687532  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:53.687593  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:53.729803  567375 cri.go:89] found id: ""
	I0414 12:11:53.729836  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.729848  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:53.729866  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:53.729883  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:53.787229  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:53.787281  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:53.803320  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:53.803362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:53.879853  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:53.879875  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:53.879890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:53.967553  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:53.967596  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:53.539970  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 12:11:53.540182  569647 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 12:11:53.540212  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.541047  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.541429  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.541524  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.541675  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.541851  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.542530  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.542705  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.543948  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.544373  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.544393  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.544612  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.544830  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.544993  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.545159  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.558318  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0414 12:11:53.558838  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.559474  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.559499  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.560235  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.560534  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.562619  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.562979  569647 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 12:11:53.562998  569647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 12:11:53.563018  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.566082  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.566663  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.566691  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.566774  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.566975  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.567140  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.567309  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.653877  569647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:11:53.673348  569647 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:11:53.673443  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:53.686796  569647 api_server.go:72] duration metric: took 222.138072ms to wait for apiserver process to appear ...
	I0414 12:11:53.686829  569647 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:11:53.686850  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:53.691583  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0414 12:11:53.692740  569647 api_server.go:141] control plane version: v1.32.2
	I0414 12:11:53.692773  569647 api_server.go:131] duration metric: took 5.935428ms to wait for apiserver health ...
	I0414 12:11:53.692785  569647 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:11:53.696080  569647 system_pods.go:59] 8 kube-system pods found
	I0414 12:11:53.696119  569647 system_pods.go:61] "coredns-668d6bf9bc-w4bzb" [e6206551-e8cd-4eec-9fe0-d1e6a8ce92c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 12:11:53.696131  569647 system_pods.go:61] "etcd-newest-cni-104469" [2ee08cb2-71cf-4277-a620-2e489f3f2446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 12:11:53.696144  569647 system_pods.go:61] "kube-apiserver-newest-cni-104469" [14f7a41a-018f-4f66-bda8-f372f0bc5064] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 12:11:53.696152  569647 system_pods.go:61] "kube-controller-manager-newest-cni-104469" [178f361d-e24b-4bfb-a916-3507cd011e3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 12:11:53.696166  569647 system_pods.go:61] "kube-proxy-tt6kz" [3ef9ada6-36d4-4ba9-92e5-e3542317f468] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 12:11:53.696174  569647 system_pods.go:61] "kube-scheduler-newest-cni-104469" [43010056-fcfe-4ef5-a834-9651e3123276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 12:11:53.696185  569647 system_pods.go:61] "metrics-server-f79f97bbb-vrl2k" [6cec0337-8996-4c11-86b6-be3f25e2eeda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:11:53.696194  569647 system_pods.go:61] "storage-provisioner" [3998c14d-608d-43b5-a6b9-972918ac6675] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 12:11:53.696206  569647 system_pods.go:74] duration metric: took 3.412863ms to wait for pod list to return data ...
	I0414 12:11:53.696220  569647 default_sa.go:34] waiting for default service account to be created ...
	I0414 12:11:53.698670  569647 default_sa.go:45] found service account: "default"
	I0414 12:11:53.698689  569647 default_sa.go:55] duration metric: took 2.459718ms for default service account to be created ...
	I0414 12:11:53.698700  569647 kubeadm.go:582] duration metric: took 234.05034ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0414 12:11:53.698722  569647 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:11:53.700885  569647 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:11:53.700905  569647 node_conditions.go:123] node cpu capacity is 2
	I0414 12:11:53.700921  569647 node_conditions.go:105] duration metric: took 2.19269ms to run NodePressure ...
	I0414 12:11:53.700934  569647 start.go:241] waiting for startup goroutines ...
	I0414 12:11:53.730838  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 12:11:53.789427  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:11:53.829021  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 12:11:53.829048  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 12:11:53.841313  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 12:11:53.841347  569647 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 12:11:53.907638  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 12:11:53.907670  569647 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 12:11:53.908006  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 12:11:53.908053  569647 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 12:11:53.983378  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:11:53.983415  569647 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 12:11:54.086187  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 12:11:54.086215  569647 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 12:11:54.087358  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:11:54.186051  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 12:11:54.186081  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 12:11:54.280182  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 12:11:54.280213  569647 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 12:11:54.380765  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 12:11:54.380797  569647 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 12:11:54.389761  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.389795  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.390159  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.390186  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.390206  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.390216  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.390216  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:54.390489  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.390507  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.399373  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.399399  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.399699  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.399801  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.399744  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:54.445995  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 12:11:54.446027  569647 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 12:11:54.478086  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 12:11:54.478117  569647 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 12:11:54.547414  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 12:11:54.547444  569647 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 12:11:54.635136  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 12:11:55.810138  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.020663207s)
	I0414 12:11:55.810204  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.810217  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.810539  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.810567  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.810584  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.810593  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.810853  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.810870  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.975467  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.888057826s)
	I0414 12:11:55.975538  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.975556  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.975946  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.975975  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.975977  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:55.975985  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.976010  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.976328  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:55.976401  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.976418  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.976435  569647 addons.go:479] Verifying addon metrics-server=true in "newest-cni-104469"
	I0414 12:11:56.493194  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.858004612s)
	I0414 12:11:56.493258  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:56.493276  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:56.493618  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:56.493637  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:56.493654  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:56.493669  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:56.493684  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:56.493941  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:56.493958  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:56.495184  569647 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-104469 addons enable metrics-server
	
	I0414 12:11:56.496411  569647 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 12:11:56.497675  569647 addons.go:514] duration metric: took 3.032922178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 12:11:56.497730  569647 start.go:246] waiting for cluster config update ...
	I0414 12:11:56.497749  569647 start.go:255] writing updated cluster config ...
	I0414 12:11:56.498155  569647 ssh_runner.go:195] Run: rm -f paused
	I0414 12:11:56.560467  569647 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 12:11:56.562298  569647 out.go:177] * Done! kubectl is now configured to use "newest-cni-104469" cluster and "default" namespace by default
	I0414 12:11:56.509793  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:56.527348  567375 kubeadm.go:597] duration metric: took 4m3.66529435s to restartPrimaryControlPlane
	W0414 12:11:56.527439  567375 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 12:11:56.527471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:11:57.129851  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:11:57.148604  567375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:11:57.161658  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:11:57.174834  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:11:57.174855  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:11:57.174903  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:11:57.187575  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:11:57.187656  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:11:57.200722  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:11:57.212875  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:11:57.212938  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:11:57.224425  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.234090  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:11:57.234150  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.244756  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:11:57.254119  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:11:57.254179  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:11:57.263664  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:11:57.335377  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:11:57.335465  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:11:57.480832  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:11:57.481011  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:11:57.481159  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:11:57.665866  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:11:57.667749  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:11:57.667857  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:11:57.667951  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:11:57.668066  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:11:57.668147  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:11:57.668265  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:11:57.668349  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:11:57.668440  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:11:57.668605  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:11:57.669216  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:11:57.669669  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:11:57.669739  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:11:57.669815  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:11:57.786691  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:11:58.140236  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:11:58.329890  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:11:58.422986  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:11:58.436920  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:11:58.438164  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:11:58.438254  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:11:58.590525  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:11:58.592980  567375 out.go:235]   - Booting up control plane ...
	I0414 12:11:58.593129  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:11:58.603522  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:11:58.603646  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:11:58.604814  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:11:58.609402  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:12:38.610672  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:12:38.611482  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:38.611732  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:43.612152  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:43.612389  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:53.612812  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:53.613076  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:13.613917  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:13.614151  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616094  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:53.616337  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616381  567375 kubeadm.go:310] 
	I0414 12:13:53.616467  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:13:53.616525  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:13:53.616535  567375 kubeadm.go:310] 
	I0414 12:13:53.616587  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:13:53.616626  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:13:53.616782  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:13:53.616803  567375 kubeadm.go:310] 
	I0414 12:13:53.616927  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:13:53.616975  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:13:53.617019  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:13:53.617040  567375 kubeadm.go:310] 
	I0414 12:13:53.617133  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:13:53.617207  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:13:53.617220  567375 kubeadm.go:310] 
	I0414 12:13:53.617379  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:13:53.617479  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:13:53.617552  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:13:53.617615  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:13:53.617621  567375 kubeadm.go:310] 
	I0414 12:13:53.618369  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:13:53.618463  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:13:53.618564  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 12:13:53.618776  567375 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 12:13:53.618845  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:13:54.079747  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:13:54.094028  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:13:54.103509  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:13:54.103536  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:13:54.103601  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:13:54.112305  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:13:54.112379  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:13:54.121095  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:13:54.129511  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:13:54.129569  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:13:54.138481  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.147165  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:13:54.147236  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.157633  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:13:54.167514  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:13:54.167580  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:13:54.177012  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:13:54.380519  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:15:50.310615  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:15:50.310709  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 12:15:50.312555  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:15:50.312621  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:15:50.312752  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:15:50.312914  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:15:50.313060  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:15:50.313152  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:15:50.316148  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:15:50.316217  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:15:50.316295  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:15:50.316380  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:15:50.316450  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:15:50.316548  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:15:50.316653  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:15:50.316746  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:15:50.316835  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:15:50.316942  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:15:50.317005  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:15:50.317040  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:15:50.317086  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:15:50.317133  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:15:50.317180  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:15:50.317230  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:15:50.317288  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:15:50.317415  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:15:50.317492  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:15:50.317525  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:15:50.317593  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:15:50.319132  567375 out.go:235]   - Booting up control plane ...
	I0414 12:15:50.319215  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:15:50.319298  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:15:50.319374  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:15:50.319478  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:15:50.319619  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:15:50.319660  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:15:50.319744  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.319956  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320056  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320241  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320326  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320504  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320593  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320780  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320883  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.321042  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.321060  567375 kubeadm.go:310] 
	I0414 12:15:50.321125  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:15:50.321180  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:15:50.321189  567375 kubeadm.go:310] 
	I0414 12:15:50.321243  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:15:50.321291  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:15:50.321409  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:15:50.321418  567375 kubeadm.go:310] 
	I0414 12:15:50.321529  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:15:50.321561  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:15:50.321589  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:15:50.321601  567375 kubeadm.go:310] 
	I0414 12:15:50.321700  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:15:50.321774  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:15:50.321780  567375 kubeadm.go:310] 
	I0414 12:15:50.321876  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:15:50.321967  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:15:50.322037  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:15:50.322099  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:15:50.322146  567375 kubeadm.go:310] 
	I0414 12:15:50.322192  567375 kubeadm.go:394] duration metric: took 7m57.509642242s to StartCluster
	I0414 12:15:50.322260  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:15:50.322317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:15:50.365321  567375 cri.go:89] found id: ""
	I0414 12:15:50.365360  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.365372  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:15:50.365388  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:15:50.365462  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:15:50.399917  567375 cri.go:89] found id: ""
	I0414 12:15:50.399956  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.399969  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:15:50.399977  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:15:50.400039  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:15:50.433841  567375 cri.go:89] found id: ""
	I0414 12:15:50.433889  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.433900  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:15:50.433906  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:15:50.433962  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:15:50.472959  567375 cri.go:89] found id: ""
	I0414 12:15:50.472993  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.473001  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:15:50.473008  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:15:50.473069  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:15:50.506397  567375 cri.go:89] found id: ""
	I0414 12:15:50.506434  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.506446  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:15:50.506454  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:15:50.506521  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:15:50.540645  567375 cri.go:89] found id: ""
	I0414 12:15:50.540672  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.540681  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:15:50.540687  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:15:50.540765  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:15:50.574232  567375 cri.go:89] found id: ""
	I0414 12:15:50.574263  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.574272  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:15:50.574278  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:15:50.574333  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:15:50.607014  567375 cri.go:89] found id: ""
	I0414 12:15:50.607044  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.607051  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:15:50.607063  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:15:50.607075  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:15:50.660430  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:15:50.660471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:15:50.676411  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:15:50.676454  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:15:50.782951  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:15:50.782981  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:15:50.782994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:15:50.886201  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:15:50.886250  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 12:15:50.923193  567375 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 12:15:50.923259  567375 out.go:270] * 
	W0414 12:15:50.923378  567375 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.923400  567375 out.go:270] * 
	W0414 12:15:50.924263  567375 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 12:15:50.927535  567375 out.go:201] 
	W0414 12:15:50.928729  567375 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.928768  567375 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 12:15:50.928787  567375 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 12:15:50.930136  567375 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.435588183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633493435553714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5d01a71-504d-4b9c-8eee-74702c6ba7a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.436079108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eed1ba20-acfb-4ac7-a3a4-c5f0cda24a2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.436136871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eed1ba20-acfb-4ac7-a3a4-c5f0cda24a2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.436175098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eed1ba20-acfb-4ac7-a3a4-c5f0cda24a2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.465230332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16d30c13-567d-4f92-a61f-5cd1905df972 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.465318085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16d30c13-567d-4f92-a61f-5cd1905df972 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.466655949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50e1ffa4-ce99-4b12-901a-a67fb8ed6d12 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.467030571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633493467012112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50e1ffa4-ce99-4b12-901a-a67fb8ed6d12 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.467527321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4a39399-1754-4f99-8e11-02710db1033c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.467600116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4a39399-1754-4f99-8e11-02710db1033c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.467638238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d4a39399-1754-4f99-8e11-02710db1033c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.497128144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c42ffd1c-0c93-4450-99f7-efefce81bd39 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.497218299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c42ffd1c-0c93-4450-99f7-efefce81bd39 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.498126778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2ceb204-48da-48e6-b141-16d2a7a6e046 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.498580649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633493498556563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2ceb204-48da-48e6-b141-16d2a7a6e046 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.498994030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d843b04f-1ecb-4efa-9038-775218fe3ae4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.499056599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d843b04f-1ecb-4efa-9038-775218fe3ae4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.499106630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d843b04f-1ecb-4efa-9038-775218fe3ae4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.528415078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c7429ff-d76d-44fe-afb1-7dc25ff9b978 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.528492507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c7429ff-d76d-44fe-afb1-7dc25ff9b978 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.529295062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc1437cd-1618-41f5-aaaa-b01bb063b017 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.529727598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633493529706337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc1437cd-1618-41f5-aaaa-b01bb063b017 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.530156213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab278e06-0b17-4b68-8b6c-d392f6889b93 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.530201157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab278e06-0b17-4b68-8b6c-d392f6889b93 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:53 old-k8s-version-071646 crio[630]: time="2025-04-14 12:24:53.530233331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ab278e06-0b17-4b68-8b6c-d392f6889b93 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053736] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038097] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.980944] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.014254] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.547396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.450729] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.064215] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062646] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.183413] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.130846] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.237445] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.667185] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.071958] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.250554] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Apr14 12:08] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 12:11] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Apr14 12:13] systemd-fstab-generator[5319]: Ignoring "noauto" option for root device
	[  +0.058925] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:24:53 up 17 min,  0 users,  load average: 0.08, 0.04, 0.02
	Linux old-k8s-version-071646 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000d1a000, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b2ac00, 0x24, 0x0, ...)
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]: net.(*Dialer).DialContext(0xc0002c31a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b2ac00, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00090d140, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b2ac00, 0x24, 0x60, 0x7f0ac7fb4808, 0x118, ...)
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]: net/http.(*Transport).dial(0xc000999400, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b2ac00, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]: net/http.(*Transport).dialConn(0xc000999400, 0x4f7fe00, 0xc000120018, 0x0, 0xc0003a66c0, 0x5, 0xc000b2ac00, 0x24, 0x0, 0xc000b3cb40, ...)
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]: net/http.(*Transport).dialConnFor(0xc000999400, 0xc00094fd90)
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]: created by net/http.(*Transport).queueForDial
	Apr 14 12:24:50 old-k8s-version-071646 kubelet[6497]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 14 12:24:50 old-k8s-version-071646 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 12:24:50 old-k8s-version-071646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 12:24:51 old-k8s-version-071646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 14 12:24:51 old-k8s-version-071646 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 12:24:51 old-k8s-version-071646 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 12:24:51 old-k8s-version-071646 kubelet[6507]: I0414 12:24:51.287315    6507 server.go:416] Version: v1.20.0
	Apr 14 12:24:51 old-k8s-version-071646 kubelet[6507]: I0414 12:24:51.287866    6507 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 12:24:51 old-k8s-version-071646 kubelet[6507]: I0414 12:24:51.289802    6507 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 12:24:51 old-k8s-version-071646 kubelet[6507]: I0414 12:24:51.290768    6507 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 14 12:24:51 old-k8s-version-071646 kubelet[6507]: W0414 12:24:51.290871    6507 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (234.22757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-071646" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.50s)
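Reading the log above, the failure is that kubeadm's wait-control-plane phase times out because the kubelet on the v1.20.0 node never becomes healthy (the journal shows kubelet.service exiting with status 255 and systemd restarting it, counter at 114). The log's own suggestion is to check 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of that remediation, assuming the profile name from this run; the driver and container-runtime flags are inferred from the KVM_Linux_crio job name, not from the log itself:

	# Inspect why the kubelet keeps crash-looping on the node (same ssh pattern used elsewhere in this report)
	out/minikube-linux-amd64 -p old-k8s-version-071646 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-071646 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# Retry the start with the cgroup driver pinned to systemd, as the suggestion in the log advises.
	# Assumption: --driver=kvm2 and --container-runtime=crio are taken from the job name, not from this log.
	out/minikube-linux-amd64 start -p old-k8s-version-071646 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
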

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:24:58.632406  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:25:10.055516  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:25:55.784612  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:26:51.971863  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:27:06.615250  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:27:20.784225  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:27:34.586320  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/no-preload-500740/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:28:14.425986  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:28:40.356765  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:28:46.407757  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/default-k8s-diff-port-477612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:28:51.499376  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:28:57.652309  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/no-preload-500740/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:29:58.632281  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:30:09.475148  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/default-k8s-diff-port-477612/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:30:10.055254  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
E0414 12:30:23.858322  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.226:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.226:8443: connect: connection refused
(the warning above repeated 22 more times while the apiserver remained unreachable)
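The repeated warnings come from the test's polling helper: it lists pods matching the k8s-app=kubernetes-dashboard label over and over until they are Running or the 9m0s deadline expires, and every failed list (here, connection refused because the apiserver never came back after the restart) is logged as a WARNING and retried. The following is a minimal sketch of that kind of wait loop written against client-go; it is illustrative only, not the actual helpers_test.go implementation, and the names waitForPods and allRunning are hypothetical.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls the apiserver until all pods matching selector are Running
// or ctx expires. List failures are logged as warnings and retried, which is
// what produces the repeated lines seen above when the apiserver is down.
func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else if allRunning(pods.Items) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q did not start: %w", selector, ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}
}

// allRunning reports whether at least one pod exists and every pod is Running.
func allRunning(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		log.Fatal(err)
	}
}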
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (235.261807ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-071646" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-071646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-071646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.812µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-071646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
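The empty "Addon deployment info" above is what start_stop_delete_test.go:295 inspects: it describes the dashboard-metrics-scraper deployment and asserts the output contains "registry.k8s.io/echoserver:1.4". In this run the kubectl describe itself failed against the dead apiserver, so there was nothing to match. A standalone sketch of the same check, assuming the old-k8s-version-071646 context were reachable, looks like this (illustrative only, not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const wantImage = "registry.k8s.io/echoserver:1.4"

	// Same command the test runs; the profile name is taken from the logs above.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-071646",
		"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
	if err != nil {
		// In this run the apiserver was unreachable, so the describe fails and
		// there is no deployment info to inspect.
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), wantImage) {
		fmt.Printf("addon did not load correct image; expected output to contain %q\n", wantImage)
		return
	}
	fmt.Println("dashboard-metrics-scraper uses the expected image")
}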
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (220.924702ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-071646 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-751466 image list                          | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| delete  | -p embed-certs-751466                                  | embed-certs-751466           | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| start   | -p newest-cni-104469 --memory=2200 --alsologtostderr   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-500740 image list                           | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| delete  | -p no-preload-500740                                   | no-preload-500740            | jenkins | v1.35.0 | 14 Apr 25 12:10 UTC | 14 Apr 25 12:10 UTC |
	| addons  | enable metrics-server -p newest-cni-104469             | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-104469                  | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-104469 --memory=2200 --alsologtostderr   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-477612                           | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-477612 | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | default-k8s-diff-port-477612                           |                              |         |         |                     |                     |
	| image   | newest-cni-104469 image list                           | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:11 UTC | 14 Apr 25 12:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:12 UTC | 14 Apr 25 12:12 UTC |
	| delete  | -p newest-cni-104469                                   | newest-cni-104469            | jenkins | v1.35.0 | 14 Apr 25 12:12 UTC | 14 Apr 25 12:12 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:11:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:11:21.120181  569647 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:11:21.120306  569647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:11:21.120314  569647 out.go:358] Setting ErrFile to fd 2...
	I0414 12:11:21.120321  569647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:11:21.120558  569647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 12:11:21.121141  569647 out.go:352] Setting JSON to false
	I0414 12:11:21.122099  569647 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21232,"bootTime":1744611449,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:11:21.122165  569647 start.go:139] virtualization: kvm guest
	I0414 12:11:21.125125  569647 out.go:177] * [newest-cni-104469] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:11:21.126818  569647 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 12:11:21.126843  569647 notify.go:220] Checking for updates...
	I0414 12:11:21.129634  569647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:11:21.130894  569647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:21.132126  569647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 12:11:21.133333  569647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:11:21.134633  569647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:11:21.136670  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:21.137109  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.137207  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.153425  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0414 12:11:21.153887  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.154408  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.154435  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.154848  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.155038  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.155280  569647 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:11:21.155578  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.155618  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.171468  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0414 12:11:21.172092  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.172627  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.172657  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.173069  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.173264  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.212393  569647 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 12:11:21.213612  569647 start.go:297] selected driver: kvm2
	I0414 12:11:21.213629  569647 start.go:901] validating driver "kvm2" against &{Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:21.213754  569647 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:11:21.214497  569647 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:11:21.214593  569647 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:11:21.230852  569647 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:11:21.231270  569647 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0414 12:11:21.231336  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:21.231396  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:21.231436  569647 start.go:340] cluster config:
	{Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:21.231575  569647 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:11:21.233331  569647 out.go:177] * Starting "newest-cni-104469" primary control-plane node in "newest-cni-104469" cluster
	I0414 12:11:21.234770  569647 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:11:21.234813  569647 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:11:21.234825  569647 cache.go:56] Caching tarball of preloaded images
	I0414 12:11:21.234902  569647 preload.go:172] Found /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:11:21.234912  569647 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:11:21.235013  569647 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/config.json ...
	I0414 12:11:21.235220  569647 start.go:360] acquireMachinesLock for newest-cni-104469: {Name:mk9887763d4f1632e3241820221c182dd1c00c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:11:21.235263  569647 start.go:364] duration metric: took 25.31µs to acquireMachinesLock for "newest-cni-104469"
	I0414 12:11:21.235277  569647 start.go:96] Skipping create...Using existing machine configuration
	I0414 12:11:21.235284  569647 fix.go:54] fixHost starting: 
	I0414 12:11:21.235603  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:21.235648  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:21.250885  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0414 12:11:21.251441  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:21.251920  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:21.251949  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:21.252312  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:21.252478  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:21.252628  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:21.254356  569647 fix.go:112] recreateIfNeeded on newest-cni-104469: state=Stopped err=<nil>
	I0414 12:11:21.254385  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	W0414 12:11:21.254563  569647 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 12:11:21.256588  569647 out.go:177] * Restarting existing kvm2 VM for "newest-cni-104469" ...
	I0414 12:11:20.198916  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:20.198958  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:20.238329  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:20.238362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:22.793258  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:22.807500  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:22.807583  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:22.844169  567375 cri.go:89] found id: ""
	I0414 12:11:22.844198  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.844210  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:22.844218  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:22.844283  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:22.883943  567375 cri.go:89] found id: ""
	I0414 12:11:22.883974  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.883986  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:22.883994  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:22.884063  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:22.918904  567375 cri.go:89] found id: ""
	I0414 12:11:22.918938  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.918950  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:22.918958  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:22.919015  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:22.959839  567375 cri.go:89] found id: ""
	I0414 12:11:22.959879  567375 logs.go:282] 0 containers: []
	W0414 12:11:22.959892  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:22.959900  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:22.959966  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:23.002272  567375 cri.go:89] found id: ""
	I0414 12:11:23.002301  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.002313  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:23.002324  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:23.002392  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:23.037206  567375 cri.go:89] found id: ""
	I0414 12:11:23.037242  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.037254  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:23.037262  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:23.037339  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:23.073871  567375 cri.go:89] found id: ""
	I0414 12:11:23.073898  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.073907  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:23.073912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:23.073974  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:23.118533  567375 cri.go:89] found id: ""
	I0414 12:11:23.118571  567375 logs.go:282] 0 containers: []
	W0414 12:11:23.118584  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:23.118597  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:23.118615  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:23.133894  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:23.133938  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:23.226964  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:23.226992  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:23.227010  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:23.352810  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:23.352855  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:23.402260  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:23.402297  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:21.257925  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Start
	I0414 12:11:21.258153  569647 main.go:141] libmachine: (newest-cni-104469) starting domain...
	I0414 12:11:21.258183  569647 main.go:141] libmachine: (newest-cni-104469) ensuring networks are active...
	I0414 12:11:21.259127  569647 main.go:141] libmachine: (newest-cni-104469) Ensuring network default is active
	I0414 12:11:21.259517  569647 main.go:141] libmachine: (newest-cni-104469) Ensuring network mk-newest-cni-104469 is active
	I0414 12:11:21.260074  569647 main.go:141] libmachine: (newest-cni-104469) getting domain XML...
	I0414 12:11:21.260776  569647 main.go:141] libmachine: (newest-cni-104469) creating domain...
	I0414 12:11:22.524766  569647 main.go:141] libmachine: (newest-cni-104469) waiting for IP...
	I0414 12:11:22.525521  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:22.526003  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:22.526073  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:22.526003  569682 retry.go:31] will retry after 307.883967ms: waiting for domain to come up
	I0414 12:11:22.835858  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:22.836463  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:22.836493  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:22.836420  569682 retry.go:31] will retry after 334.279409ms: waiting for domain to come up
	I0414 12:11:23.172155  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:23.172695  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:23.172727  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:23.172660  569682 retry.go:31] will retry after 299.810788ms: waiting for domain to come up
	I0414 12:11:23.474019  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:23.474427  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:23.474451  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:23.474416  569682 retry.go:31] will retry after 607.883043ms: waiting for domain to come up
	I0414 12:11:24.084316  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:24.084843  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:24.084887  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:24.084803  569682 retry.go:31] will retry after 665.362972ms: waiting for domain to come up
	I0414 12:11:24.751457  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:24.752025  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:24.752048  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:24.752008  569682 retry.go:31] will retry after 745.34954ms: waiting for domain to come up
	I0414 12:11:25.499392  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:25.544776  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:25.544821  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:25.544720  569682 retry.go:31] will retry after 908.451126ms: waiting for domain to come up
	I0414 12:11:25.957521  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:25.970937  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:25.971011  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:26.004566  567375 cri.go:89] found id: ""
	I0414 12:11:26.004601  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.004612  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:26.004620  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:26.004683  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:26.044984  567375 cri.go:89] found id: ""
	I0414 12:11:26.045016  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.045029  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:26.045037  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:26.045102  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:26.077283  567375 cri.go:89] found id: ""
	I0414 12:11:26.077316  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.077328  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:26.077336  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:26.077403  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:26.115453  567375 cri.go:89] found id: ""
	I0414 12:11:26.115478  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.115486  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:26.115493  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:26.115547  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:26.154963  567375 cri.go:89] found id: ""
	I0414 12:11:26.155002  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.155013  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:26.155021  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:26.155115  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:26.192115  567375 cri.go:89] found id: ""
	I0414 12:11:26.192148  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.192160  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:26.192169  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:26.192230  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:26.233202  567375 cri.go:89] found id: ""
	I0414 12:11:26.233236  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.233248  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:26.233256  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:26.233320  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:26.267547  567375 cri.go:89] found id: ""
	I0414 12:11:26.267579  567375 logs.go:282] 0 containers: []
	W0414 12:11:26.267591  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:26.267602  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:26.267618  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:26.331976  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:26.332017  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:26.345893  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:26.345942  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:26.424476  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:26.424502  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:26.424518  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:26.513728  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:26.513763  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:29.057175  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:29.073805  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:29.073912  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:29.105549  567375 cri.go:89] found id: ""
	I0414 12:11:29.105578  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.105586  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:29.105594  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:29.105663  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:29.137613  567375 cri.go:89] found id: ""
	I0414 12:11:29.137643  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.137652  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:29.137658  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:29.137712  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:29.169687  567375 cri.go:89] found id: ""
	I0414 12:11:29.169726  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.169739  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:29.169752  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:29.169837  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:29.202019  567375 cri.go:89] found id: ""
	I0414 12:11:29.202054  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.202068  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:29.202077  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:29.202153  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:29.233953  567375 cri.go:89] found id: ""
	I0414 12:11:29.233991  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.234004  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:29.234014  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:29.234083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:29.267465  567375 cri.go:89] found id: ""
	I0414 12:11:29.267498  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.267511  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:29.267518  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:29.267585  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:29.301872  567375 cri.go:89] found id: ""
	I0414 12:11:29.301897  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.301905  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:29.301912  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:29.301965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:29.336739  567375 cri.go:89] found id: ""
	I0414 12:11:29.336778  567375 logs.go:282] 0 containers: []
	W0414 12:11:29.336792  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:29.336804  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:29.336821  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:29.386826  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:29.386867  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:29.402381  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:29.402411  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:29.471119  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:29.471146  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:29.471162  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:29.549103  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:29.549147  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:26.454591  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:26.455304  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:26.455337  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:26.455253  569682 retry.go:31] will retry after 971.962699ms: waiting for domain to come up
	I0414 12:11:27.428593  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:27.429086  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:27.429145  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:27.429061  569682 retry.go:31] will retry after 1.858464483s: waiting for domain to come up
	I0414 12:11:29.290212  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:29.290765  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:29.290794  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:29.290721  569682 retry.go:31] will retry after 1.729999321s: waiting for domain to come up
	I0414 12:11:31.022585  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:31.023131  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:31.023154  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:31.023104  569682 retry.go:31] will retry after 1.833182014s: waiting for domain to come up
	I0414 12:11:32.093046  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:32.111567  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:32.111656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:32.147814  567375 cri.go:89] found id: ""
	I0414 12:11:32.147845  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.147856  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:32.147865  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:32.147932  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:32.184293  567375 cri.go:89] found id: ""
	I0414 12:11:32.184327  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.184337  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:32.184345  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:32.184415  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:32.220242  567375 cri.go:89] found id: ""
	I0414 12:11:32.220283  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.220294  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:32.220302  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:32.220368  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:32.259235  567375 cri.go:89] found id: ""
	I0414 12:11:32.259274  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.259302  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:32.259320  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:32.259395  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:32.296349  567375 cri.go:89] found id: ""
	I0414 12:11:32.296383  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.296396  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:32.296404  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:32.296477  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:32.337046  567375 cri.go:89] found id: ""
	I0414 12:11:32.337078  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.337097  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:32.337106  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:32.337181  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:32.370809  567375 cri.go:89] found id: ""
	I0414 12:11:32.370841  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.370855  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:32.370864  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:32.370923  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:32.409908  567375 cri.go:89] found id: ""
	I0414 12:11:32.409936  567375 logs.go:282] 0 containers: []
	W0414 12:11:32.409945  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:32.409955  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:32.409967  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:32.463974  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:32.464019  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:32.478989  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:32.479020  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:32.547623  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:32.547647  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:32.547659  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:32.635676  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:32.635716  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:32.858397  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:32.858993  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:32.859046  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:32.858997  569682 retry.go:31] will retry after 2.287767065s: waiting for domain to come up
	I0414 12:11:35.148507  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:35.149113  569647 main.go:141] libmachine: (newest-cni-104469) DBG | unable to find current IP address of domain newest-cni-104469 in network mk-newest-cni-104469
	I0414 12:11:35.149168  569647 main.go:141] libmachine: (newest-cni-104469) DBG | I0414 12:11:35.149076  569682 retry.go:31] will retry after 3.709674414s: waiting for domain to come up
	I0414 12:11:35.172933  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:35.185360  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:35.185430  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:35.215587  567375 cri.go:89] found id: ""
	I0414 12:11:35.215619  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.215630  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:35.215639  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:35.215703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:35.246725  567375 cri.go:89] found id: ""
	I0414 12:11:35.246756  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.246769  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:35.246777  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:35.246842  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:35.277582  567375 cri.go:89] found id: ""
	I0414 12:11:35.277615  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.277627  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:35.277634  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:35.277703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:35.308852  567375 cri.go:89] found id: ""
	I0414 12:11:35.308884  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.308896  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:35.308904  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:35.308976  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:35.344753  567375 cri.go:89] found id: ""
	I0414 12:11:35.344785  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.344805  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:35.344813  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:35.344889  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:35.375334  567375 cri.go:89] found id: ""
	I0414 12:11:35.375369  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.375382  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:35.375392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:35.375461  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:35.407962  567375 cri.go:89] found id: ""
	I0414 12:11:35.407995  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.408003  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:35.408009  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:35.408072  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:35.438923  567375 cri.go:89] found id: ""
	I0414 12:11:35.438951  567375 logs.go:282] 0 containers: []
	W0414 12:11:35.438959  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:35.438969  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:35.438982  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:35.451619  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:35.451655  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:35.515840  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:35.515872  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:35.515890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:35.591791  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:35.591838  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:35.629963  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:35.629994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.177510  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:38.189629  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:38.189703  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:38.221893  567375 cri.go:89] found id: ""
	I0414 12:11:38.221930  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.221943  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:38.221952  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:38.222022  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:38.253207  567375 cri.go:89] found id: ""
	I0414 12:11:38.253238  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.253246  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:38.253254  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:38.253314  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:38.284207  567375 cri.go:89] found id: ""
	I0414 12:11:38.284237  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.284250  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:38.284259  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:38.284317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:38.316011  567375 cri.go:89] found id: ""
	I0414 12:11:38.316042  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.316055  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:38.316062  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:38.316129  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:38.346662  567375 cri.go:89] found id: ""
	I0414 12:11:38.346694  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.346706  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:38.346715  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:38.346775  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:38.378428  567375 cri.go:89] found id: ""
	I0414 12:11:38.378460  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.378468  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:38.378474  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:38.378527  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:38.409730  567375 cri.go:89] found id: ""
	I0414 12:11:38.409781  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.409793  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:38.409803  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:38.409880  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:38.441413  567375 cri.go:89] found id: ""
	I0414 12:11:38.441439  567375 logs.go:282] 0 containers: []
	W0414 12:11:38.441448  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:38.441458  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:38.441471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:38.488672  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:38.488723  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:38.501037  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:38.501066  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:38.563620  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:38.563643  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:38.563660  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:38.637874  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:38.637912  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:38.861814  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.862326  569647 main.go:141] libmachine: (newest-cni-104469) found domain IP: 192.168.61.116
	I0414 12:11:38.862344  569647 main.go:141] libmachine: (newest-cni-104469) reserving static IP address...
	I0414 12:11:38.862354  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has current primary IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.862810  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "newest-cni-104469", mac: "52:54:00:db:0b:38", ip: "192.168.61.116"} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.862843  569647 main.go:141] libmachine: (newest-cni-104469) DBG | skip adding static IP to network mk-newest-cni-104469 - found existing host DHCP lease matching {name: "newest-cni-104469", mac: "52:54:00:db:0b:38", ip: "192.168.61.116"}
	I0414 12:11:38.862859  569647 main.go:141] libmachine: (newest-cni-104469) reserved static IP address 192.168.61.116 for domain newest-cni-104469
	I0414 12:11:38.862870  569647 main.go:141] libmachine: (newest-cni-104469) waiting for SSH...
	I0414 12:11:38.862881  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Getting to WaitForSSH function...
	I0414 12:11:38.865098  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.865437  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.865470  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.865529  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Using SSH client type: external
	I0414 12:11:38.865560  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Using SSH private key: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa (-rw-------)
	I0414 12:11:38.865587  569647 main.go:141] libmachine: (newest-cni-104469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 12:11:38.865634  569647 main.go:141] libmachine: (newest-cni-104469) DBG | About to run SSH command:
	I0414 12:11:38.865654  569647 main.go:141] libmachine: (newest-cni-104469) DBG | exit 0
	I0414 12:11:38.991362  569647 main.go:141] libmachine: (newest-cni-104469) DBG | SSH cmd err, output: <nil>: 
	I0414 12:11:38.991738  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetConfigRaw
	I0414 12:11:38.992363  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:38.995348  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.995739  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.995763  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.996122  569647 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/config.json ...
	I0414 12:11:38.996361  569647 machine.go:93] provisionDockerMachine start ...
	I0414 12:11:38.996390  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:38.996627  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:38.998988  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.999418  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:38.999442  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:38.999619  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:38.999790  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:38.999942  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.000165  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.000352  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.000637  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.000650  569647 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 12:11:39.111566  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 12:11:39.111601  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.111888  569647 buildroot.go:166] provisioning hostname "newest-cni-104469"
	I0414 12:11:39.111921  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.112099  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.114831  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.115201  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.115231  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.115348  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.115518  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.115681  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.115834  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.116016  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.116227  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.116243  569647 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-104469 && echo "newest-cni-104469" | sudo tee /etc/hostname
	I0414 12:11:39.235982  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-104469
	
	I0414 12:11:39.236023  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.238767  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.239126  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.239154  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.239375  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.239553  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.239730  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.239848  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.239994  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.240236  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.240253  569647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-104469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-104469/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-104469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 12:11:39.359797  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:11:39.359831  569647 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20534-503273/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-503273/.minikube}
	I0414 12:11:39.359856  569647 buildroot.go:174] setting up certificates
	I0414 12:11:39.359871  569647 provision.go:84] configureAuth start
	I0414 12:11:39.359887  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetMachineName
	I0414 12:11:39.360227  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:39.363241  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.363632  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.363661  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.363808  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.366517  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.366878  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.366923  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.367093  569647 provision.go:143] copyHostCerts
	I0414 12:11:39.367154  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem, removing ...
	I0414 12:11:39.367178  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem
	I0414 12:11:39.367259  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/ca.pem (1078 bytes)
	I0414 12:11:39.367409  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem, removing ...
	I0414 12:11:39.367422  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem
	I0414 12:11:39.367461  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/cert.pem (1123 bytes)
	I0414 12:11:39.367564  569647 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem, removing ...
	I0414 12:11:39.367576  569647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem
	I0414 12:11:39.367609  569647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-503273/.minikube/key.pem (1675 bytes)
	I0414 12:11:39.367696  569647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem org=jenkins.newest-cni-104469 san=[127.0.0.1 192.168.61.116 localhost minikube newest-cni-104469]
	I0414 12:11:39.512453  569647 provision.go:177] copyRemoteCerts
	I0414 12:11:39.512534  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 12:11:39.512575  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.515537  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.515909  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.515945  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.516071  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.516276  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.516443  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.516573  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:39.601480  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 12:11:39.625398  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 12:11:39.650802  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 12:11:39.675068  569647 provision.go:87] duration metric: took 315.17938ms to configureAuth
	I0414 12:11:39.675101  569647 buildroot.go:189] setting minikube options for container-runtime
	I0414 12:11:39.675349  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:39.675432  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.678249  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.678617  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.678663  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.678831  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.679031  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.679193  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.679332  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.679487  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:39.679696  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:39.679712  569647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 12:11:39.899965  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 12:11:39.899998  569647 machine.go:96] duration metric: took 903.619003ms to provisionDockerMachine
	I0414 12:11:39.900014  569647 start.go:293] postStartSetup for "newest-cni-104469" (driver="kvm2")
	I0414 12:11:39.900028  569647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 12:11:39.900053  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:39.900415  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 12:11:39.900451  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:39.903052  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.903452  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:39.903483  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:39.903679  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:39.903870  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:39.904069  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:39.904241  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:39.989513  569647 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 12:11:39.993490  569647 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 12:11:39.993517  569647 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/addons for local assets ...
	I0414 12:11:39.993594  569647 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-503273/.minikube/files for local assets ...
	I0414 12:11:39.993691  569647 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem -> 5104442.pem in /etc/ssl/certs
	I0414 12:11:39.993814  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 12:11:40.002553  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:11:40.024737  569647 start.go:296] duration metric: took 124.706191ms for postStartSetup
	I0414 12:11:40.024779  569647 fix.go:56] duration metric: took 18.789494511s for fixHost
	I0414 12:11:40.024800  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.027427  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.027719  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.027751  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.027915  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.028129  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.028292  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.028414  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.028579  569647 main.go:141] libmachine: Using SSH client type: native
	I0414 12:11:40.028888  569647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0414 12:11:40.028904  569647 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 12:11:40.135768  569647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744632700.105724681
	
	I0414 12:11:40.135795  569647 fix.go:216] guest clock: 1744632700.105724681
	I0414 12:11:40.135802  569647 fix.go:229] Guest: 2025-04-14 12:11:40.105724681 +0000 UTC Remote: 2025-04-14 12:11:40.024782852 +0000 UTC m=+18.941186859 (delta=80.941829ms)
	I0414 12:11:40.135840  569647 fix.go:200] guest clock delta is within tolerance: 80.941829ms
	I0414 12:11:40.135845  569647 start.go:83] releasing machines lock for "newest-cni-104469", held for 18.900572975s
	I0414 12:11:40.135867  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.136110  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:40.139092  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.139498  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.139528  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.139729  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140213  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140375  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:40.140494  569647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 12:11:40.140550  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.140597  569647 ssh_runner.go:195] Run: cat /version.json
	I0414 12:11:40.140620  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:40.143168  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143464  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143523  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.143545  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.143717  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.143928  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:40.143941  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.143958  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:40.144105  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.144137  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:40.144273  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:40.144422  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:40.144572  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:40.144723  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:40.253928  569647 ssh_runner.go:195] Run: systemctl --version
	I0414 12:11:40.259508  569647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 12:11:40.399347  569647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 12:11:40.404975  569647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 12:11:40.405068  569647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 12:11:40.420258  569647 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 12:11:40.420289  569647 start.go:495] detecting cgroup driver to use...
	I0414 12:11:40.420369  569647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 12:11:40.436755  569647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 12:11:40.450152  569647 docker.go:217] disabling cri-docker service (if available) ...
	I0414 12:11:40.450245  569647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 12:11:40.464139  569647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 12:11:40.477505  569647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 12:11:40.591544  569647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 12:11:40.762510  569647 docker.go:233] disabling docker service ...
	I0414 12:11:40.762590  569647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 12:11:40.777138  569647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 12:11:40.790390  569647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 12:11:40.907968  569647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 12:11:41.012941  569647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 12:11:41.026846  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 12:11:41.044129  569647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 12:11:41.044224  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.054103  569647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 12:11:41.054180  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.063996  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.073838  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.083706  569647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 12:11:41.093759  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.103550  569647 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.118834  569647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:11:41.128734  569647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 12:11:41.137754  569647 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 12:11:41.137910  569647 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 12:11:41.150890  569647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 12:11:41.160130  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:41.274669  569647 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 12:11:41.366746  569647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 12:11:41.366838  569647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 12:11:41.371400  569647 start.go:563] Will wait 60s for crictl version
	I0414 12:11:41.371472  569647 ssh_runner.go:195] Run: which crictl
	I0414 12:11:41.375071  569647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 12:11:41.414018  569647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 12:11:41.414145  569647 ssh_runner.go:195] Run: crio --version
	I0414 12:11:41.441601  569647 ssh_runner.go:195] Run: crio --version
	I0414 12:11:41.470278  569647 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 12:11:41.471736  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetIP
	I0414 12:11:41.474769  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:41.475176  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:41.475208  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:41.475465  569647 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 12:11:41.480427  569647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:11:41.494695  569647 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0414 12:11:41.174407  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:41.188283  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:41.188349  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:41.218963  567375 cri.go:89] found id: ""
	I0414 12:11:41.218995  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.219007  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:41.219015  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:41.219080  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:41.254974  567375 cri.go:89] found id: ""
	I0414 12:11:41.255007  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.255016  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:41.255022  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:41.255083  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:41.291440  567375 cri.go:89] found id: ""
	I0414 12:11:41.291478  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.291490  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:41.291498  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:41.291566  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:41.326668  567375 cri.go:89] found id: ""
	I0414 12:11:41.326699  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.326710  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:41.326718  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:41.326788  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:41.358533  567375 cri.go:89] found id: ""
	I0414 12:11:41.358564  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.358577  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:41.358585  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:41.358656  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:41.390847  567375 cri.go:89] found id: ""
	I0414 12:11:41.390892  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.390904  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:41.390916  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:41.390986  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:41.422995  567375 cri.go:89] found id: ""
	I0414 12:11:41.423029  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.423040  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:41.423047  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:41.423108  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:41.455329  567375 cri.go:89] found id: ""
	I0414 12:11:41.455359  567375 logs.go:282] 0 containers: []
	W0414 12:11:41.455371  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:41.455384  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:41.455398  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:41.506257  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:41.506288  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:41.518836  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:41.518866  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:41.588714  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:41.588744  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:41.588764  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:41.672001  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:41.672039  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:44.216461  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:44.229313  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:44.229404  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:44.263625  567375 cri.go:89] found id: ""
	I0414 12:11:44.263662  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.263674  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:44.263682  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:44.263746  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:44.295775  567375 cri.go:89] found id: ""
	I0414 12:11:44.295815  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.295829  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:44.295836  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:44.295905  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:44.340233  567375 cri.go:89] found id: ""
	I0414 12:11:44.340270  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.340281  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:44.340289  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:44.340358  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:44.379008  567375 cri.go:89] found id: ""
	I0414 12:11:44.379046  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.379060  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:44.379070  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:44.379148  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:44.412114  567375 cri.go:89] found id: ""
	I0414 12:11:44.412151  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.412160  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:44.412166  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:44.412217  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:44.446940  567375 cri.go:89] found id: ""
	I0414 12:11:44.446967  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.446975  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:44.446982  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:44.447037  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:44.494452  567375 cri.go:89] found id: ""
	I0414 12:11:44.494491  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.494503  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:44.494511  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:44.494578  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:44.531111  567375 cri.go:89] found id: ""
	I0414 12:11:44.531158  567375 logs.go:282] 0 containers: []
	W0414 12:11:44.531171  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:44.531185  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:44.531201  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:44.590909  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:44.590954  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:44.607376  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:44.607428  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:44.678145  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:44.678171  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:44.678190  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:44.758306  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:44.758351  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:41.495951  569647 kubeadm.go:883] updating cluster {Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 12:11:41.496082  569647 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:11:41.496147  569647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:11:41.537224  569647 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 12:11:41.537318  569647 ssh_runner.go:195] Run: which lz4
	I0414 12:11:41.541348  569647 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 12:11:41.545374  569647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 12:11:41.545417  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 12:11:42.748200  569647 crio.go:462] duration metric: took 1.206904316s to copy over tarball
	I0414 12:11:42.748273  569647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 12:11:44.940244  569647 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191944178s)
	I0414 12:11:44.940275  569647 crio.go:469] duration metric: took 2.192045159s to extract the tarball
	I0414 12:11:44.940282  569647 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 12:11:44.976846  569647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:11:45.017205  569647 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 12:11:45.017232  569647 cache_images.go:84] Images are preloaded, skipping loading
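The preload step above copies preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 to the node and extracts it into /var with `tar -I lz4`, after which the crictl image list confirms all images are present. Purely as an illustration of the same unpack operation (minikube itself shells out to tar, as the log shows), here is a minimal Go sketch assuming github.com/pierrec/lz4/v4 for the lz4 framing:

```go
// Sketch: unpack a *.tar.lz4 preload like the one copied above.
// The archive path and destination are taken from the log.
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"

	"github.com/pierrec/lz4/v4"
)

func untarLZ4(archive, dest string) error {
	f, err := os.Open(archive)
	if err != nil {
		return err
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f)) // lz4 frame -> tar stream
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		target := filepath.Join(dest, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, 0o755); err != nil {
				return err
			}
		case tar.TypeReg:
			if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
				return err
			}
			out, err := os.Create(target)
			if err != nil {
				return err
			}
			if _, err := io.Copy(out, tr); err != nil {
				out.Close()
				return err
			}
			out.Close()
		}
	}
}

func main() {
	if err := untarLZ4("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}
```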
	I0414 12:11:45.017240  569647 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.32.2 crio true true} ...
	I0414 12:11:45.017357  569647 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-104469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 12:11:45.017443  569647 ssh_runner.go:195] Run: crio config
	I0414 12:11:45.066045  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:45.066074  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:45.066086  569647 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0414 12:11:45.066108  569647 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-104469 NodeName:newest-cni-104469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 12:11:45.066250  569647 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-104469"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.116"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
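The kubeadm config dumped above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`, later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, assuming gopkg.in/yaml.v3, of splitting such a stream and listing each document's apiVersion/kind as a quick sanity check:

```go
// Sketch: enumerate the documents in a multi-document kubeadm config
// like the one dumped above. The file path is illustrative.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the YAML stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
```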
	
	I0414 12:11:45.066317  569647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 12:11:45.075884  569647 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 12:11:45.075969  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 12:11:45.084969  569647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0414 12:11:45.100691  569647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 12:11:45.116384  569647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0414 12:11:45.131922  569647 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0414 12:11:45.135512  569647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
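The two runs above pin control-plane.minikube.internal in /etc/hosts: first a grep for the exact tab-separated entry, then a rewrite that drops any stale mapping and appends the current IP. A sketch of that idempotent update in Go (IP and hostname taken from the log; it writes to a `.new` copy rather than /etc/hosts so it stays side-effect free):

```go
// Sketch: rebuild a hosts file with exactly one entry for the given name,
// mirroring the grep/rewrite shown in the log above.
package main

import (
	"log"
	"os"
	"strings"
)

func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.116", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```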
	I0414 12:11:45.146978  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:45.261463  569647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:11:45.279128  569647 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469 for IP: 192.168.61.116
	I0414 12:11:45.279159  569647 certs.go:194] generating shared ca certs ...
	I0414 12:11:45.279178  569647 certs.go:226] acquiring lock for ca certs: {Name:mk2ca8042d8ce6432f652f74a69c48f600f56757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:45.279434  569647 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key
	I0414 12:11:45.279505  569647 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key
	I0414 12:11:45.279520  569647 certs.go:256] generating profile certs ...
	I0414 12:11:45.279642  569647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/client.key
	I0414 12:11:45.279729  569647 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.key.774aa14a
	I0414 12:11:45.279810  569647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.key
	I0414 12:11:45.279954  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem (1338 bytes)
	W0414 12:11:45.279996  569647 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444_empty.pem, impossibly tiny 0 bytes
	I0414 12:11:45.280007  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 12:11:45.280039  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/ca.pem (1078 bytes)
	I0414 12:11:45.280076  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/cert.pem (1123 bytes)
	I0414 12:11:45.280105  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/certs/key.pem (1675 bytes)
	I0414 12:11:45.280168  569647 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem (1708 bytes)
	I0414 12:11:45.280847  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 12:11:45.314145  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 12:11:45.338428  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 12:11:45.370752  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 12:11:45.397988  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 12:11:45.425777  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 12:11:45.448378  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 12:11:45.472600  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/newest-cni-104469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 12:11:45.495315  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/certs/510444.pem --> /usr/share/ca-certificates/510444.pem (1338 bytes)
	I0414 12:11:45.517788  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/ssl/certs/5104442.pem --> /usr/share/ca-certificates/5104442.pem (1708 bytes)
	I0414 12:11:45.541189  569647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-503273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 12:11:45.566831  569647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 12:11:45.584065  569647 ssh_runner.go:195] Run: openssl version
	I0414 12:11:45.589870  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 12:11:45.600360  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.604736  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.604808  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:11:45.610342  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 12:11:45.620182  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510444.pem && ln -fs /usr/share/ca-certificates/510444.pem /etc/ssl/certs/510444.pem"
	I0414 12:11:45.630441  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.634658  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 10:59 /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.634747  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510444.pem
	I0414 12:11:45.640599  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510444.pem /etc/ssl/certs/51391683.0"
	I0414 12:11:45.651269  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5104442.pem && ln -fs /usr/share/ca-certificates/5104442.pem /etc/ssl/certs/5104442.pem"
	I0414 12:11:45.662116  569647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.666678  569647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 10:59 /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.666779  569647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5104442.pem
	I0414 12:11:45.672334  569647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5104442.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 12:11:45.682554  569647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 12:11:45.686828  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 12:11:45.693016  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 12:11:45.698975  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 12:11:45.704832  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 12:11:45.710682  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 12:11:45.716357  569647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
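The six openssl runs above are 24-hour expiry checks: `x509 -checkend 86400` exits non-zero if the certificate expires within 86400 seconds. A minimal equivalent with crypto/x509 (the path is one of the certs checked in the log):

```go
// Sketch: report whether a PEM certificate expires within the given window,
// matching what `openssl x509 -checkend 86400` tests in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```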
	I0414 12:11:45.722031  569647 kubeadm.go:392] StartCluster: {Name:newest-cni-104469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-104469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:11:45.722164  569647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 12:11:45.722256  569647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:11:45.762047  569647 cri.go:89] found id: ""
	I0414 12:11:45.762149  569647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 12:11:45.772159  569647 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 12:11:45.772188  569647 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 12:11:45.772238  569647 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 12:11:45.781693  569647 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 12:11:45.782599  569647 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-104469" does not appear in /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:45.782917  569647 kubeconfig.go:62] /home/jenkins/minikube-integration/20534-503273/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-104469" cluster setting kubeconfig missing "newest-cni-104469" context setting]
	I0414 12:11:45.783560  569647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:45.785561  569647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 12:11:45.795019  569647 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0414 12:11:45.795061  569647 kubeadm.go:1160] stopping kube-system containers ...
	I0414 12:11:45.795073  569647 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 12:11:45.795121  569647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:11:45.832759  569647 cri.go:89] found id: ""
	I0414 12:11:45.832853  569647 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 12:11:45.849887  569647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:11:45.860004  569647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:11:45.860044  569647 kubeadm.go:157] found existing configuration files:
	
	I0414 12:11:45.860105  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:11:45.869219  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:11:45.869287  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:11:45.878859  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:11:45.890576  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:11:45.890661  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:11:45.909668  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:11:45.918990  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:11:45.919080  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:11:45.928230  569647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:11:45.936683  569647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:11:45.936746  569647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
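The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for https://control-plane.minikube.internal:8443 and removed if the endpoint is missing (here the files do not exist yet, so every grep exits with status 2). A sketch of that decision in Go, with removal only reported rather than executed:

```go
// Sketch: flag kubeconfigs that lack the expected control-plane endpoint,
// mirroring the grep/rm sequence in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		switch {
		case os.IsNotExist(err):
			fmt.Printf("%s: missing, nothing to clean up\n", f)
		case err != nil:
			fmt.Printf("%s: %v\n", f, err)
		case strings.Contains(string(data), endpoint):
			fmt.Printf("%s: endpoint present, keep\n", f)
		default:
			fmt.Printf("%s: endpoint missing, would remove\n", f)
		}
	}
}
```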
	I0414 12:11:45.945411  569647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:11:45.954641  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.058335  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:47.316487  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:47.331760  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:47.331855  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:47.366754  567375 cri.go:89] found id: ""
	I0414 12:11:47.366790  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.366800  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:47.366807  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:47.366876  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:47.401386  567375 cri.go:89] found id: ""
	I0414 12:11:47.401418  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.401430  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:47.401438  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:47.401500  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:47.436630  567375 cri.go:89] found id: ""
	I0414 12:11:47.436672  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.436686  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:47.436695  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:47.436770  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:47.476106  567375 cri.go:89] found id: ""
	I0414 12:11:47.476140  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.476149  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:47.476156  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:47.476224  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:47.511092  567375 cri.go:89] found id: ""
	I0414 12:11:47.511117  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.511126  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:47.511134  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:47.511196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:47.543336  567375 cri.go:89] found id: ""
	I0414 12:11:47.543365  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.543375  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:47.543392  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:47.543455  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:47.591258  567375 cri.go:89] found id: ""
	I0414 12:11:47.591282  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.591307  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:47.591315  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:47.591378  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:47.631828  567375 cri.go:89] found id: ""
	I0414 12:11:47.631858  567375 logs.go:282] 0 containers: []
	W0414 12:11:47.631867  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:47.631888  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:47.631901  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:47.681449  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:47.681491  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:47.695772  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:47.695808  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:47.767246  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:47.767279  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:47.767312  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:47.849554  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:47.849608  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:46.644225  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.835780  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:46.909528  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:47.008035  569647 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:11:47.008154  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:47.508435  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:48.008446  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:48.509090  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:49.008987  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:49.097677  569647 api_server.go:72] duration metric: took 2.08963857s to wait for apiserver process to appear ...
	I0414 12:11:49.097719  569647 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:11:49.097747  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:49.098477  569647 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0414 12:11:49.597917  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:51.914295  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 12:11:51.914332  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 12:11:51.914351  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:51.950360  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 12:11:51.950390  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 12:11:52.098794  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:52.144939  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 12:11:52.144974  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 12:11:52.598644  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:52.602917  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 12:11:52.602941  569647 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 12:11:53.098719  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:53.103810  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0414 12:11:53.110249  569647 api_server.go:141] control plane version: v1.32.2
	I0414 12:11:53.110286  569647 api_server.go:131] duration metric: took 4.012559017s to wait for apiserver health ...
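The healthz probes above first return 403 (anonymous access is rejected until the RBAC bootstrap roles exist), then 500 while post-start hooks such as rbac/bootstrap-roles and bootstrap-controller are still pending, and finally 200. A minimal sketch of the same poll-until-healthy loop (address and 500ms cadence taken from the log; TLS verification is skipped because only reachability is being probed):

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a
// deadline passes, as recorded in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // reachability probe only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.116:8443/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```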
	I0414 12:11:53.110296  569647 cni.go:84] Creating CNI manager for ""
	I0414 12:11:53.110304  569647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:11:53.112437  569647 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 12:11:53.113774  569647 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 12:11:53.123553  569647 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 12:11:53.140406  569647 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:11:53.150738  569647 system_pods.go:59] 8 kube-system pods found
	I0414 12:11:53.150784  569647 system_pods.go:61] "coredns-668d6bf9bc-w4bzb" [e6206551-e8cd-4eec-9fe0-d1e6a8ce92c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 12:11:53.150794  569647 system_pods.go:61] "etcd-newest-cni-104469" [2ee08cb2-71cf-4277-a620-2e489f3f2446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 12:11:53.150802  569647 system_pods.go:61] "kube-apiserver-newest-cni-104469" [14f7a41a-018f-4f66-bda8-f372f0bc5064] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 12:11:53.150816  569647 system_pods.go:61] "kube-controller-manager-newest-cni-104469" [178f361d-e24b-4bfb-a916-3507cd011e3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 12:11:53.150824  569647 system_pods.go:61] "kube-proxy-tt6kz" [3ef9ada6-36d4-4ba9-92e5-e3542317f468] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 12:11:53.150833  569647 system_pods.go:61] "kube-scheduler-newest-cni-104469" [43010056-fcfe-4ef5-a834-9651e3123276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 12:11:53.150837  569647 system_pods.go:61] "metrics-server-f79f97bbb-vrl2k" [6cec0337-8996-4c11-86b6-be3f25e2eeda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:11:53.150843  569647 system_pods.go:61] "storage-provisioner" [3998c14d-608d-43b5-a6b9-972918ac6675] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 12:11:53.150850  569647 system_pods.go:74] duration metric: took 10.421128ms to wait for pod list to return data ...
	I0414 12:11:53.150859  569647 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:11:53.153603  569647 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:11:53.153640  569647 node_conditions.go:123] node cpu capacity is 2
	I0414 12:11:53.153659  569647 node_conditions.go:105] duration metric: took 2.796154ms to run NodePressure ...
	I0414 12:11:53.153685  569647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 12:11:53.451414  569647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:11:53.463015  569647 ops.go:34] apiserver oom_adj: -16
	I0414 12:11:53.463044  569647 kubeadm.go:597] duration metric: took 7.690847961s to restartPrimaryControlPlane
	I0414 12:11:53.463065  569647 kubeadm.go:394] duration metric: took 7.741049865s to StartCluster
	I0414 12:11:53.463091  569647 settings.go:142] acquiring lock: {Name:mkb26484678cdb285726f4f09eadd211c1c462d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:53.463196  569647 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 12:11:53.464309  569647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-503273/kubeconfig: {Name:mk7fadb1af02cafc6cd01b453c568d963296b4d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:11:53.464608  569647 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:11:53.464788  569647 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 12:11:53.464894  569647 config.go:182] Loaded profile config "newest-cni-104469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:11:53.464933  569647 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-104469"
	I0414 12:11:53.464954  569647 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-104469"
	I0414 12:11:53.464960  569647 addons.go:69] Setting default-storageclass=true in profile "newest-cni-104469"
	W0414 12:11:53.464967  569647 addons.go:247] addon storage-provisioner should already be in state true
	I0414 12:11:53.464983  569647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-104469"
	I0414 12:11:53.464984  569647 addons.go:69] Setting metrics-server=true in profile "newest-cni-104469"
	I0414 12:11:53.465025  569647 addons.go:238] Setting addon metrics-server=true in "newest-cni-104469"
	W0414 12:11:53.465050  569647 addons.go:247] addon metrics-server should already be in state true
	I0414 12:11:53.465049  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.464964  569647 addons.go:69] Setting dashboard=true in profile "newest-cni-104469"
	I0414 12:11:53.465086  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.465109  569647 addons.go:238] Setting addon dashboard=true in "newest-cni-104469"
	W0414 12:11:53.465120  569647 addons.go:247] addon dashboard should already be in state true
	I0414 12:11:53.465150  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.465445  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465486  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465499  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465445  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465525  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465568  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.465599  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.465623  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.467469  569647 out.go:177] * Verifying Kubernetes components...
	I0414 12:11:53.468679  569647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:11:53.485803  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42723
	I0414 12:11:53.486041  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0414 12:11:53.486193  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0414 12:11:53.486206  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0414 12:11:53.486456  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486715  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486835  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.486935  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.487152  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487175  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487310  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487338  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487538  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.487539  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487605  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487814  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.487837  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.487880  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.487991  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488089  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488232  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.488630  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488634  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488690  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.488754  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.488797  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.488837  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.507852  569647 addons.go:238] Setting addon default-storageclass=true in "newest-cni-104469"
	W0414 12:11:53.507881  569647 addons.go:247] addon default-storageclass should already be in state true
	I0414 12:11:53.507917  569647 host.go:66] Checking if "newest-cni-104469" exists ...
	I0414 12:11:53.508343  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.508402  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.511624  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0414 12:11:53.512195  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.512735  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.512757  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.513140  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.513333  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.515362  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.517751  569647 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 12:11:53.518934  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 12:11:53.518959  569647 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 12:11:53.518984  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.524940  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.525463  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.525484  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.525902  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.526120  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.526292  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.526446  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.528938  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0414 12:11:53.529570  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.530010  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.530027  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.530494  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.530752  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.531020  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37235
	I0414 12:11:53.531557  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.531663  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0414 12:11:53.532241  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.532451  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.532470  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.533132  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.533152  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.533484  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.533633  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.533647  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.534060  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.534624  569647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:11:53.534675  569647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:11:53.535130  569647 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:11:53.535462  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.536805  569647 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:11:53.536831  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 12:11:53.536852  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.537516  569647 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 12:11:53.538836  569647 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 12:11:50.386577  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:50.399173  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:50.399257  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:50.429909  567375 cri.go:89] found id: ""
	I0414 12:11:50.429938  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.429948  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:50.429956  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:50.430016  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:50.460948  567375 cri.go:89] found id: ""
	I0414 12:11:50.460981  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.460990  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:50.460996  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:50.461056  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:50.492141  567375 cri.go:89] found id: ""
	I0414 12:11:50.492172  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.492179  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:50.492186  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:50.492249  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:50.524274  567375 cri.go:89] found id: ""
	I0414 12:11:50.524301  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.524309  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:50.524317  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:50.524391  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:50.556554  567375 cri.go:89] found id: ""
	I0414 12:11:50.556583  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.556594  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:50.556601  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:50.556671  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:50.598848  567375 cri.go:89] found id: ""
	I0414 12:11:50.598878  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.598889  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:50.598898  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:50.598965  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:50.629450  567375 cri.go:89] found id: ""
	I0414 12:11:50.629482  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.629491  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:50.629497  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:50.629550  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:50.660726  567375 cri.go:89] found id: ""
	I0414 12:11:50.660764  567375 logs.go:282] 0 containers: []
	W0414 12:11:50.660778  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:50.660790  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:50.660809  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:50.711830  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:50.711868  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:50.724837  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:50.724869  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:50.787307  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:50.787340  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:50.787356  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:50.861702  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:50.861749  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
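The 567375 process above cycles through each control-plane component with "sudo crictl ps -a --quiet --name=<component>", finds no containers, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal local sketch of that per-component probe, assuming crictl is installed and run directly rather than through minikube's ssh_runner (which executes the same commands over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			// An empty list corresponds to the `found id: ""` / "0 containers" lines above.
			fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
		}
	}
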
	I0414 12:11:53.398783  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:53.412227  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:11:53.412304  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:11:53.451115  567375 cri.go:89] found id: ""
	I0414 12:11:53.451149  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.451161  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:11:53.451170  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:11:53.451236  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:11:53.489749  567375 cri.go:89] found id: ""
	I0414 12:11:53.489783  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.489793  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:11:53.489801  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:11:53.489847  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:11:53.542102  567375 cri.go:89] found id: ""
	I0414 12:11:53.542122  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.542132  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:11:53.542140  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:11:53.542196  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:11:53.582780  567375 cri.go:89] found id: ""
	I0414 12:11:53.582814  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.582827  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:11:53.582837  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:11:53.582900  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:11:53.616309  567375 cri.go:89] found id: ""
	I0414 12:11:53.616339  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.616355  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:11:53.616368  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:11:53.616429  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:11:53.650528  567375 cri.go:89] found id: ""
	I0414 12:11:53.650564  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.650578  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:11:53.650586  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:11:53.650658  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:11:53.687484  567375 cri.go:89] found id: ""
	I0414 12:11:53.687514  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.687525  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:11:53.687532  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:11:53.687593  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:11:53.729803  567375 cri.go:89] found id: ""
	I0414 12:11:53.729836  567375 logs.go:282] 0 containers: []
	W0414 12:11:53.729848  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:11:53.729866  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:11:53.729883  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:11:53.787229  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:11:53.787281  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:11:53.803320  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:11:53.803362  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:11:53.879853  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:11:53.879875  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:11:53.879890  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:11:53.967553  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:11:53.967596  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 12:11:53.539970  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 12:11:53.540182  569647 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 12:11:53.540212  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.541047  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.541429  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.541524  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.541675  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.541851  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.542530  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.542705  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.543948  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.544373  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.544393  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.544612  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.544830  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.544993  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.545159  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
	I0414 12:11:53.558318  569647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0414 12:11:53.558838  569647 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:11:53.559474  569647 main.go:141] libmachine: Using API Version  1
	I0414 12:11:53.559499  569647 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:11:53.560235  569647 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:11:53.560534  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetState
	I0414 12:11:53.562619  569647 main.go:141] libmachine: (newest-cni-104469) Calling .DriverName
	I0414 12:11:53.562979  569647 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 12:11:53.562998  569647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 12:11:53.563018  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHHostname
	I0414 12:11:53.566082  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.566663  569647 main.go:141] libmachine: (newest-cni-104469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:0b:38", ip: ""} in network mk-newest-cni-104469: {Iface:virbr1 ExpiryTime:2025-04-14 13:11:32 +0000 UTC Type:0 Mac:52:54:00:db:0b:38 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:newest-cni-104469 Clientid:01:52:54:00:db:0b:38}
	I0414 12:11:53.566691  569647 main.go:141] libmachine: (newest-cni-104469) DBG | domain newest-cni-104469 has defined IP address 192.168.61.116 and MAC address 52:54:00:db:0b:38 in network mk-newest-cni-104469
	I0414 12:11:53.566774  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHPort
	I0414 12:11:53.566975  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHKeyPath
	I0414 12:11:53.567140  569647 main.go:141] libmachine: (newest-cni-104469) Calling .GetSSHUsername
	I0414 12:11:53.567309  569647 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa Username:docker}
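Each addon manifest above is copied to the VM through a fresh SSH client built from the machine's key and the docker user (the sshutil.go "new ssh client" lines). A rough sketch of opening such a client with golang.org/x/crypto/ssh, using the address, user, and key path from the log; this is an illustration only, not minikube's sshutil implementation:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20534-503273/.minikube/machines/newest-cni-104469/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; real code should verify the host key
		}
		client, err := ssh.Dial("tcp", "192.168.61.116:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		// Run a single command over the connection, the way ssh_runner does.
		out, err := session.Output("sudo systemctl is-active kubelet")
		fmt.Printf("%s err=%v\n", out, err)
	}
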
	I0414 12:11:53.653877  569647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:11:53.673348  569647 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:11:53.673443  569647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:53.686796  569647 api_server.go:72] duration metric: took 222.138072ms to wait for apiserver process to appear ...
	I0414 12:11:53.686829  569647 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:11:53.686850  569647 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0414 12:11:53.691583  569647 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0414 12:11:53.692740  569647 api_server.go:141] control plane version: v1.32.2
	I0414 12:11:53.692773  569647 api_server.go:131] duration metric: took 5.935428ms to wait for apiserver health ...
	I0414 12:11:53.692785  569647 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:11:53.696080  569647 system_pods.go:59] 8 kube-system pods found
	I0414 12:11:53.696119  569647 system_pods.go:61] "coredns-668d6bf9bc-w4bzb" [e6206551-e8cd-4eec-9fe0-d1e6a8ce92c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 12:11:53.696131  569647 system_pods.go:61] "etcd-newest-cni-104469" [2ee08cb2-71cf-4277-a620-2e489f3f2446] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 12:11:53.696144  569647 system_pods.go:61] "kube-apiserver-newest-cni-104469" [14f7a41a-018f-4f66-bda8-f372f0bc5064] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 12:11:53.696152  569647 system_pods.go:61] "kube-controller-manager-newest-cni-104469" [178f361d-e24b-4bfb-a916-3507cd011e3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 12:11:53.696166  569647 system_pods.go:61] "kube-proxy-tt6kz" [3ef9ada6-36d4-4ba9-92e5-e3542317f468] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 12:11:53.696174  569647 system_pods.go:61] "kube-scheduler-newest-cni-104469" [43010056-fcfe-4ef5-a834-9651e3123276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 12:11:53.696185  569647 system_pods.go:61] "metrics-server-f79f97bbb-vrl2k" [6cec0337-8996-4c11-86b6-be3f25e2eeda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:11:53.696194  569647 system_pods.go:61] "storage-provisioner" [3998c14d-608d-43b5-a6b9-972918ac6675] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 12:11:53.696206  569647 system_pods.go:74] duration metric: took 3.412863ms to wait for pod list to return data ...
	I0414 12:11:53.696220  569647 default_sa.go:34] waiting for default service account to be created ...
	I0414 12:11:53.698670  569647 default_sa.go:45] found service account: "default"
	I0414 12:11:53.698689  569647 default_sa.go:55] duration metric: took 2.459718ms for default service account to be created ...
	I0414 12:11:53.698700  569647 kubeadm.go:582] duration metric: took 234.05034ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0414 12:11:53.698722  569647 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:11:53.700885  569647 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:11:53.700905  569647 node_conditions.go:123] node cpu capacity is 2
	I0414 12:11:53.700921  569647 node_conditions.go:105] duration metric: took 2.19269ms to run NodePressure ...
	I0414 12:11:53.700934  569647 start.go:241] waiting for startup goroutines ...
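Between 12:11:53.686 and 12:11:53.700 the 569647 process waits for the apiserver process, polls https://192.168.61.116:8443/healthz until it answers 200 "ok", then checks kube-system pods, the default service account, and node conditions. A minimal sketch of just the healthz poll, assuming TLS verification is skipped for brevity (the real check authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body) // the log shows the body "ok"
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz not ready within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.116:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
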
	I0414 12:11:53.730838  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 12:11:53.789427  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:11:53.829021  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 12:11:53.829048  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 12:11:53.841313  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 12:11:53.841347  569647 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 12:11:53.907638  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 12:11:53.907670  569647 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 12:11:53.908006  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 12:11:53.908053  569647 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 12:11:53.983378  569647 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:11:53.983415  569647 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 12:11:54.086187  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 12:11:54.086215  569647 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 12:11:54.087358  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:11:54.186051  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 12:11:54.186081  569647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 12:11:54.280182  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 12:11:54.280213  569647 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 12:11:54.380765  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 12:11:54.380797  569647 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 12:11:54.389761  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.389795  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.390159  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.390186  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.390206  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.390216  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.390216  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:54.390489  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.390507  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.399373  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:54.399399  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:54.399699  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:54.399801  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:54.399744  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:54.445995  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 12:11:54.446027  569647 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 12:11:54.478086  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 12:11:54.478117  569647 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 12:11:54.547414  569647 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 12:11:54.547444  569647 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 12:11:54.635136  569647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 12:11:55.810138  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.020663207s)
	I0414 12:11:55.810204  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.810217  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.810539  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.810567  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.810584  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.810593  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.810853  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.810870  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.975467  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.888057826s)
	I0414 12:11:55.975538  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.975556  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.975946  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.975975  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.975977  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:55.975985  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:55.976010  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:55.976328  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:55.976401  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:55.976418  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:55.976435  569647 addons.go:479] Verifying addon metrics-server=true in "newest-cni-104469"
	I0414 12:11:56.493194  569647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.858004612s)
	I0414 12:11:56.493258  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:56.493276  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:56.493618  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:56.493637  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:56.493654  569647 main.go:141] libmachine: (newest-cni-104469) DBG | Closing plugin on server side
	I0414 12:11:56.493669  569647 main.go:141] libmachine: Making call to close driver server
	I0414 12:11:56.493684  569647 main.go:141] libmachine: (newest-cni-104469) Calling .Close
	I0414 12:11:56.493941  569647 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:11:56.493958  569647 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:11:56.495184  569647 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-104469 addons enable metrics-server
	
	I0414 12:11:56.496411  569647 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 12:11:56.497675  569647 addons.go:514] duration metric: took 3.032922178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 12:11:56.497730  569647 start.go:246] waiting for cluster config update ...
	I0414 12:11:56.497749  569647 start.go:255] writing updated cluster config ...
	I0414 12:11:56.498155  569647 ssh_runner.go:195] Run: rm -f paused
	I0414 12:11:56.560467  569647 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 12:11:56.562298  569647 out.go:177] * Done! kubectl is now configured to use "newest-cni-104469" cluster and "default" namespace by default
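The addon flow above stages each manifest under /etc/kubernetes/addons over SSH and then applies each group with the bundled kubectl and the node-local kubeconfig, as the "sudo KUBECONFIG=... kubectl apply -f ..." lines show. A simplified sketch of that apply step; it goes through "sudo env" instead of minikube's inline KUBECONFIG= form, and the paths are the ones from the log, so they must exist for this to run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func applyAddon(kubectl string, manifests []string) error {
		// Mirrors the logged command:
		//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <m1> -f <m2> ...
		args := []string{"env", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("sudo", args...)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
		if err := applyAddon(kubectl, []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
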
	I0414 12:11:56.509793  567375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:11:56.527348  567375 kubeadm.go:597] duration metric: took 4m3.66529435s to restartPrimaryControlPlane
	W0414 12:11:56.527439  567375 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 12:11:56.527471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:11:57.129851  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:11:57.148604  567375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:11:57.161658  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:11:57.174834  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:11:57.174855  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:11:57.174903  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:11:57.187575  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:11:57.187656  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:11:57.200722  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:11:57.212875  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:11:57.212938  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:11:57.224425  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.234090  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:11:57.234150  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:11:57.244756  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:11:57.254119  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:11:57.254179  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
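Before each kubeadm init attempt, the 567375 process checks the four kubeconfig files under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 server line and removes any file that lacks it; here all four are absent, so each grep exits with status 2 and each rm -f is a no-op. A hedged sketch of that cleanup logic, doing the check in-process instead of via grep over SSH:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const wantServer = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), wantServer) {
				fmt.Printf("kept %s\n", f)
				continue
			}
			// Missing file or no matching server line: treat as stale, like the
			// grep exit-2 followed by "sudo rm -f" pairs in the log above.
			_ = os.Remove(f)
			fmt.Printf("removed (or confirmed absent) %s\n", f)
		}
	}
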
	I0414 12:11:57.263664  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:11:57.335377  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:11:57.335465  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:11:57.480832  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:11:57.481011  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:11:57.481159  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:11:57.665866  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:11:57.667749  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:11:57.667857  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:11:57.667951  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:11:57.668066  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:11:57.668147  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:11:57.668265  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:11:57.668349  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:11:57.668440  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:11:57.668605  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:11:57.669216  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:11:57.669669  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:11:57.669739  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:11:57.669815  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:11:57.786691  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:11:58.140236  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:11:58.329890  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:11:58.422986  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:11:58.436920  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:11:58.438164  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:11:58.438254  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:11:58.590525  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:11:58.592980  567375 out.go:235]   - Booting up control plane ...
	I0414 12:11:58.593129  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:11:58.603522  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:11:58.603646  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:11:58.604814  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:11:58.609402  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:12:38.610672  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:12:38.611482  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:38.611732  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:43.612152  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:43.612389  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:12:53.612812  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:12:53.613076  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:13.613917  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:13.614151  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616094  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:13:53.616337  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:13:53.616381  567375 kubeadm.go:310] 
	I0414 12:13:53.616467  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:13:53.616525  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:13:53.616535  567375 kubeadm.go:310] 
	I0414 12:13:53.616587  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:13:53.616626  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:13:53.616782  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:13:53.616803  567375 kubeadm.go:310] 
	I0414 12:13:53.616927  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:13:53.616975  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:13:53.617019  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:13:53.617040  567375 kubeadm.go:310] 
	I0414 12:13:53.617133  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:13:53.617207  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:13:53.617220  567375 kubeadm.go:310] 
	I0414 12:13:53.617379  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:13:53.617479  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:13:53.617552  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:13:53.617615  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:13:53.617621  567375 kubeadm.go:310] 
	I0414 12:13:53.618369  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:13:53.618463  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:13:53.618564  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
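The [kubelet-check] lines above show kubeadm probing http://localhost:10248/healthz and getting connection refused because the kubelet never comes up. A minimal sketch of that probe loop, using the 40s initial timeout reported in the log; this illustrates the check rather than reproducing kubeadm's implementation:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second) // "[kubelet-check] Initial timeout of 40s" above
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				// With no kubelet listening this prints "connection refused", as in the log.
				fmt.Println("kubelet not healthy yet:", err)
				time.Sleep(5 * time.Second)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet healthz OK")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet healthz endpoint")
	}
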
	W0414 12:13:53.618776  567375 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 12:13:53.618845  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 12:13:54.079747  567375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:13:54.094028  567375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:13:54.103509  567375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:13:54.103536  567375 kubeadm.go:157] found existing configuration files:
	
	I0414 12:13:54.103601  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:13:54.112305  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:13:54.112379  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:13:54.121095  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:13:54.129511  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:13:54.129569  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:13:54.138481  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.147165  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:13:54.147236  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:13:54.157633  567375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:13:54.167514  567375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:13:54.167580  567375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:13:54.177012  567375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:13:54.380519  567375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:15:50.310615  567375 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 12:15:50.310709  567375 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 12:15:50.312555  567375 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 12:15:50.312621  567375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:15:50.312752  567375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:15:50.312914  567375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:15:50.313060  567375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 12:15:50.313152  567375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:15:50.316148  567375 out.go:235]   - Generating certificates and keys ...
	I0414 12:15:50.316217  567375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:15:50.316295  567375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:15:50.316380  567375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 12:15:50.316450  567375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 12:15:50.316548  567375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 12:15:50.316653  567375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 12:15:50.316746  567375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 12:15:50.316835  567375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 12:15:50.316942  567375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 12:15:50.317005  567375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 12:15:50.317040  567375 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 12:15:50.317086  567375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:15:50.317133  567375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:15:50.317180  567375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:15:50.317230  567375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:15:50.317288  567375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:15:50.317415  567375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:15:50.317492  567375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:15:50.317525  567375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:15:50.317593  567375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:15:50.319132  567375 out.go:235]   - Booting up control plane ...
	I0414 12:15:50.319215  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:15:50.319298  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:15:50.319374  567375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:15:50.319478  567375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:15:50.319619  567375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 12:15:50.319660  567375 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 12:15:50.319744  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.319956  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320056  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320241  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320326  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320504  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320593  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.320780  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.320883  567375 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 12:15:50.321042  567375 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 12:15:50.321060  567375 kubeadm.go:310] 
	I0414 12:15:50.321125  567375 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 12:15:50.321180  567375 kubeadm.go:310] 		timed out waiting for the condition
	I0414 12:15:50.321189  567375 kubeadm.go:310] 
	I0414 12:15:50.321243  567375 kubeadm.go:310] 	This error is likely caused by:
	I0414 12:15:50.321291  567375 kubeadm.go:310] 		- The kubelet is not running
	I0414 12:15:50.321409  567375 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 12:15:50.321418  567375 kubeadm.go:310] 
	I0414 12:15:50.321529  567375 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 12:15:50.321561  567375 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 12:15:50.321589  567375 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 12:15:50.321601  567375 kubeadm.go:310] 
	I0414 12:15:50.321700  567375 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 12:15:50.321774  567375 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 12:15:50.321780  567375 kubeadm.go:310] 
	I0414 12:15:50.321876  567375 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 12:15:50.321967  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 12:15:50.322037  567375 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 12:15:50.322099  567375 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 12:15:50.322146  567375 kubeadm.go:310] 
	I0414 12:15:50.322192  567375 kubeadm.go:394] duration metric: took 7m57.509642242s to StartCluster
	I0414 12:15:50.322260  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 12:15:50.322317  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 12:15:50.365321  567375 cri.go:89] found id: ""
	I0414 12:15:50.365360  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.365372  567375 logs.go:284] No container was found matching "kube-apiserver"
	I0414 12:15:50.365388  567375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 12:15:50.365462  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 12:15:50.399917  567375 cri.go:89] found id: ""
	I0414 12:15:50.399956  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.399969  567375 logs.go:284] No container was found matching "etcd"
	I0414 12:15:50.399977  567375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 12:15:50.400039  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 12:15:50.433841  567375 cri.go:89] found id: ""
	I0414 12:15:50.433889  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.433900  567375 logs.go:284] No container was found matching "coredns"
	I0414 12:15:50.433906  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 12:15:50.433962  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 12:15:50.472959  567375 cri.go:89] found id: ""
	I0414 12:15:50.472993  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.473001  567375 logs.go:284] No container was found matching "kube-scheduler"
	I0414 12:15:50.473008  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 12:15:50.473069  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 12:15:50.506397  567375 cri.go:89] found id: ""
	I0414 12:15:50.506434  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.506446  567375 logs.go:284] No container was found matching "kube-proxy"
	I0414 12:15:50.506454  567375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 12:15:50.506521  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 12:15:50.540645  567375 cri.go:89] found id: ""
	I0414 12:15:50.540672  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.540681  567375 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 12:15:50.540687  567375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 12:15:50.540765  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 12:15:50.574232  567375 cri.go:89] found id: ""
	I0414 12:15:50.574263  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.574272  567375 logs.go:284] No container was found matching "kindnet"
	I0414 12:15:50.574278  567375 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 12:15:50.574333  567375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 12:15:50.607014  567375 cri.go:89] found id: ""
	I0414 12:15:50.607044  567375 logs.go:282] 0 containers: []
	W0414 12:15:50.607051  567375 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 12:15:50.607063  567375 logs.go:123] Gathering logs for kubelet ...
	I0414 12:15:50.607075  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 12:15:50.660430  567375 logs.go:123] Gathering logs for dmesg ...
	I0414 12:15:50.660471  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 12:15:50.676411  567375 logs.go:123] Gathering logs for describe nodes ...
	I0414 12:15:50.676454  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 12:15:50.782951  567375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 12:15:50.782981  567375 logs.go:123] Gathering logs for CRI-O ...
	I0414 12:15:50.782994  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 12:15:50.886201  567375 logs.go:123] Gathering logs for container status ...
	I0414 12:15:50.886250  567375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 12:15:50.923193  567375 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 12:15:50.923259  567375 out.go:270] * 
	W0414 12:15:50.923378  567375 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.923400  567375 out.go:270] * 
	W0414 12:15:50.924263  567375 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 12:15:50.927535  567375 out.go:201] 
	W0414 12:15:50.928729  567375 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 12:15:50.928768  567375 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 12:15:50.928787  567375 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 12:15:50.930136  567375 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.331621513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633847331600981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edc5f539-b083-404c-9415-a947dc470d26 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.332179152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e852f62b-1e8e-44ba-a60c-a9df0825148c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.332234667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e852f62b-1e8e-44ba-a60c-a9df0825148c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.332265241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e852f62b-1e8e-44ba-a60c-a9df0825148c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.362589990Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7469818-3d53-49eb-a914-36792e76f12e name=/runtime.v1.RuntimeService/Version
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.362666119Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7469818-3d53-49eb-a914-36792e76f12e name=/runtime.v1.RuntimeService/Version
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.363526641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aaa416c7-f643-4601-a2c2-22aac9cf4ddd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.363905298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633847363884462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aaa416c7-f643-4601-a2c2-22aac9cf4ddd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.364322142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a38ff63a-5249-4155-a96f-0142090e8bf0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.364403798Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a38ff63a-5249-4155-a96f-0142090e8bf0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.364456405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a38ff63a-5249-4155-a96f-0142090e8bf0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.397207487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08d11efa-54a8-46e8-9234-2af575109087 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.397284211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08d11efa-54a8-46e8-9234-2af575109087 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.398747161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09bc820d-3618-440e-9c6c-3c4a505c7593 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.399174842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633847399147062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09bc820d-3618-440e-9c6c-3c4a505c7593 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.399880002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b074b80-3106-4e38-9e5e-f25acc059ba7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.399947142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b074b80-3106-4e38-9e5e-f25acc059ba7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.399988777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8b074b80-3106-4e38-9e5e-f25acc059ba7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.430718255Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f9fc3a2-d772-4102-b718-58faceba4060 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.430819648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f9fc3a2-d772-4102-b718-58faceba4060 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.431695799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2891251-3f63-4538-a48a-987bb4fc8430 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.432066503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633847432047306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2891251-3f63-4538-a48a-987bb4fc8430 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.432563964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3122ed4-ad3d-4e60-afcb-c54b884fa299 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.432636201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3122ed4-ad3d-4e60-afcb-c54b884fa299 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:30:47 old-k8s-version-071646 crio[630]: time="2025-04-14 12:30:47.432669266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c3122ed4-ad3d-4e60-afcb-c54b884fa299 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053736] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038097] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.980944] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.014254] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.547396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.450729] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.064215] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062646] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.183413] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.130846] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.237445] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.667185] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.071958] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.250554] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Apr14 12:08] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 12:11] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Apr14 12:13] systemd-fstab-generator[5319]: Ignoring "noauto" option for root device
	[  +0.058925] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:30:47 up 23 min,  0 users,  load average: 0.36, 0.12, 0.04
	Linux old-k8s-version-071646 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net.socket(0x4f7fe40, 0xc000ba7140, 0x48ab5d6, 0x3, 0x2, 0x1, 0x0, 0x0, 0x4fb9160, 0x0, ...)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/sock_posix.go:70 +0x1c5
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net.internetSocket(0x4f7fe40, 0xc000ba7140, 0x48ab5d6, 0x3, 0x4fb9160, 0x0, 0x4fb9160, 0xc000ba3c50, 0x1, 0x0, ...)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/ipsock_posix.go:141 +0x145
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net.(*sysDialer).doDialTCP(0xc0006ccd80, 0x4f7fe40, 0xc000ba7140, 0x0, 0xc000ba3c50, 0x3fddce0, 0x70f9210, 0x0)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/tcpsock_posix.go:65 +0xc5
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net.(*sysDialer).dialTCP(0xc0006ccd80, 0x4f7fe40, 0xc000ba7140, 0x0, 0xc000ba3c50, 0x57b620, 0x48ab5d6, 0x7f7c7c6637c0)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net.(*sysDialer).dialSingle(0xc0006ccd80, 0x4f7fe40, 0xc000ba7140, 0x4f1ff00, 0xc000ba3c50, 0x0, 0x0, 0x0, 0x0)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net.(*sysDialer).dialSerial(0xc0006ccd80, 0x4f7fe40, 0xc000ba7140, 0xc000be8140, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/dial.go:548 +0x152
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net.(*Dialer).DialContext(0xc000179920, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b64cf0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0009e2800, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b64cf0, 0x24, 0x60, 0x7f7ca4af6fc8, 0x118, ...)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net/http.(*Transport).dial(0xc0007d5400, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b64cf0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 12:30:47 old-k8s-version-071646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net/http.(*Transport).dialConn(0xc0007d5400, 0x4f7fe00, 0xc000052030, 0x0, 0xc000bb24e0, 0x5, 0xc000b64cf0, 0x24, 0x0, 0xc0006cbc20, ...)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: net/http.(*Transport).dialConnFor(0xc0007d5400, 0xc0006fbce0)
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]: created by net/http.(*Transport).queueForDial
	Apr 14 12:30:47 old-k8s-version-071646 kubelet[7148]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 2 (227.839629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-071646" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.91s)
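The failure above is the kubelet never becoming healthy on the old-k8s-version node, so kubeadm times out waiting for the control plane. minikube's own suggestion in the log is to inspect the kubelet journal and retry with the systemd cgroup driver. A minimal command sketch based solely on that suggestion (profile name copied from the log; not verified against this run):

	# Retry the failing profile with the cgroup driver minikube suggests (assumption: systemd cgroups on the node)
	minikube start -p old-k8s-version-071646 --extra-config=kubelet.cgroup-driver=systemd
	# Inspect why the kubelet keeps exiting, as recommended in the kubeadm output above
	journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start (command taken verbatim from the kubeadm hint)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause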

                                                
                                    

Test pass (271/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 4.42
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.15
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 81.06
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 133.66
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.52
35 TestAddons/parallel/Registry 23.82
37 TestAddons/parallel/InspektorGadget 11.08
38 TestAddons/parallel/MetricsServer 6.46
40 TestAddons/parallel/CSI 56.7
41 TestAddons/parallel/Headlamp 19.55
42 TestAddons/parallel/CloudSpanner 6.55
43 TestAddons/parallel/LocalPath 63.23
44 TestAddons/parallel/NvidiaDevicePlugin 6.65
45 TestAddons/parallel/Yakd 11.76
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 54.01
49 TestCertExpiration 284.09
51 TestForceSystemdFlag 41.86
52 TestForceSystemdEnv 43.53
54 TestKVMDriverInstallOrUpdate 3.89
58 TestErrorSpam/setup 39.05
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.73
61 TestErrorSpam/pause 1.56
62 TestErrorSpam/unpause 1.78
63 TestErrorSpam/stop 6.19
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.32
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 54.1
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
75 TestFunctional/serial/CacheCmd/cache/add_local 1.9
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 32.43
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.3
86 TestFunctional/serial/LogsFileCmd 1.31
87 TestFunctional/serial/InvalidService 3.92
89 TestFunctional/parallel/ConfigCmd 0.34
90 TestFunctional/parallel/DashboardCmd 18.21
91 TestFunctional/parallel/DryRun 0.32
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 1.2
97 TestFunctional/parallel/ServiceCmdConnect 9.63
98 TestFunctional/parallel/AddonsCmd 0.37
99 TestFunctional/parallel/PersistentVolumeClaim 37.92
101 TestFunctional/parallel/SSHCmd 0.4
102 TestFunctional/parallel/CpCmd 1.47
103 TestFunctional/parallel/MySQL 28.78
104 TestFunctional/parallel/FileSync 0.24
105 TestFunctional/parallel/CertSync 1.45
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
113 TestFunctional/parallel/License 0.26
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.84
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.62
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
120 TestFunctional/parallel/ImageCommands/ImageBuild 8.79
121 TestFunctional/parallel/ImageCommands/Setup 1.54
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
127 TestFunctional/parallel/ProfileCmd/profile_list 0.35
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.73
130 TestFunctional/parallel/MountCmd/any-port 9.99
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.55
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
137 TestFunctional/parallel/ServiceCmd/List 0.9
138 TestFunctional/parallel/MountCmd/specific-port 1.68
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.53
142 TestFunctional/parallel/ServiceCmd/Format 0.31
143 TestFunctional/parallel/ServiceCmd/URL 0.34
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 194.78
161 TestMultiControlPlane/serial/DeployApp 7.06
162 TestMultiControlPlane/serial/PingHostFromPods 1.23
163 TestMultiControlPlane/serial/AddWorkerNode 55.93
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
166 TestMultiControlPlane/serial/CopyFile 13.2
167 TestMultiControlPlane/serial/StopSecondaryNode 91.46
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
169 TestMultiControlPlane/serial/RestartSecondaryNode 50.02
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 467.67
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.26
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
174 TestMultiControlPlane/serial/StopCluster 272.35
175 TestMultiControlPlane/serial/RestartCluster 101.24
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
177 TestMultiControlPlane/serial/AddSecondaryNode 78.04
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
182 TestJSONOutput/start/Command 52.09
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.63
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.6
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.66
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 88.09
214 TestMountStart/serial/StartWithMountFirst 26.09
215 TestMountStart/serial/VerifyMountFirst 0.38
216 TestMountStart/serial/StartWithMountSecond 24.32
217 TestMountStart/serial/VerifyMountSecond 0.4
218 TestMountStart/serial/DeleteFirst 0.7
219 TestMountStart/serial/VerifyMountPostDelete 0.4
220 TestMountStart/serial/Stop 1.28
221 TestMountStart/serial/RestartStopped 22.95
222 TestMountStart/serial/VerifyMountPostStop 0.39
225 TestMultiNode/serial/FreshStart2Nodes 115.17
226 TestMultiNode/serial/DeployApp2Nodes 6.61
227 TestMultiNode/serial/PingHostFrom2Pods 0.78
228 TestMultiNode/serial/AddNode 51.48
229 TestMultiNode/serial/MultiNodeLabels 0.07
230 TestMultiNode/serial/ProfileList 0.57
231 TestMultiNode/serial/CopyFile 7.17
232 TestMultiNode/serial/StopNode 2.23
233 TestMultiNode/serial/StartAfterStop 38.61
234 TestMultiNode/serial/RestartKeepsNodes 341.8
235 TestMultiNode/serial/DeleteNode 2.76
236 TestMultiNode/serial/StopMultiNode 181.69
237 TestMultiNode/serial/RestartMultiNode 114.4
238 TestMultiNode/serial/ValidateNameConflict 43.03
245 TestScheduledStopUnix 114.02
249 TestRunningBinaryUpgrade 176.24
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 94.52
263 TestNetworkPlugins/group/false 3.2
267 TestStoppedBinaryUpgrade/Setup 0.42
268 TestStoppedBinaryUpgrade/Upgrade 150.14
269 TestNoKubernetes/serial/StartWithStopK8s 62.5
270 TestNoKubernetes/serial/Start 40.69
271 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
280 TestPause/serial/Start 54.68
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
282 TestNoKubernetes/serial/ProfileList 2.2
283 TestNoKubernetes/serial/Stop 1.75
284 TestNoKubernetes/serial/StartNoArgs 38.95
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
287 TestNetworkPlugins/group/auto/Start 82.49
288 TestNetworkPlugins/group/kindnet/Start 71.28
289 TestNetworkPlugins/group/auto/KubeletFlags 0.21
290 TestNetworkPlugins/group/auto/NetCatPod 11.25
291 TestNetworkPlugins/group/auto/DNS 0.14
292 TestNetworkPlugins/group/auto/Localhost 0.12
293 TestNetworkPlugins/group/auto/HairPin 0.12
294 TestNetworkPlugins/group/calico/Start 78.48
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
297 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
298 TestNetworkPlugins/group/kindnet/DNS 0.21
299 TestNetworkPlugins/group/kindnet/Localhost 0.19
300 TestNetworkPlugins/group/kindnet/HairPin 0.23
301 TestNetworkPlugins/group/custom-flannel/Start 70.42
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/calico/KubeletFlags 0.25
304 TestNetworkPlugins/group/calico/NetCatPod 10.31
305 TestNetworkPlugins/group/enable-default-cni/Start 66.17
306 TestNetworkPlugins/group/calico/DNS 0.15
307 TestNetworkPlugins/group/calico/Localhost 0.12
308 TestNetworkPlugins/group/calico/HairPin 0.11
309 TestNetworkPlugins/group/flannel/Start 88.29
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
312 TestNetworkPlugins/group/custom-flannel/DNS 0.2
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.25
317 TestNetworkPlugins/group/enable-default-cni/DNS 21.5
318 TestNetworkPlugins/group/bridge/Start 85.07
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
325 TestNetworkPlugins/group/flannel/NetCatPod 12.27
327 TestStartStop/group/no-preload/serial/FirstStart 92.16
328 TestNetworkPlugins/group/flannel/DNS 0.16
329 TestNetworkPlugins/group/flannel/Localhost 0.12
330 TestNetworkPlugins/group/flannel/HairPin 0.11
332 TestStartStop/group/embed-certs/serial/FirstStart 106.53
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
334 TestNetworkPlugins/group/bridge/NetCatPod 12.25
335 TestNetworkPlugins/group/bridge/DNS 0.16
336 TestNetworkPlugins/group/bridge/Localhost 0.13
337 TestNetworkPlugins/group/bridge/HairPin 0.13
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.77
340 TestStartStop/group/no-preload/serial/DeployApp 10.29
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
342 TestStartStop/group/no-preload/serial/Stop 90.82
343 TestStartStop/group/embed-certs/serial/DeployApp 12.27
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
345 TestStartStop/group/embed-certs/serial/Stop 91.01
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.26
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.01
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
350 TestStartStop/group/no-preload/serial/SecondStart 346.6
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
352 TestStartStop/group/embed-certs/serial/SecondStart 301.36
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
356 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 333.44
357 TestStartStop/group/old-k8s-version/serial/Stop 1.32
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
364 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
365 TestStartStop/group/embed-certs/serial/Pause 2.68
367 TestStartStop/group/newest-cni/serial/FirstStart 47.5
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
369 TestStartStop/group/no-preload/serial/Pause 2.8
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.57
373 TestStartStop/group/newest-cni/serial/Stop 11.36
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
376 TestStartStop/group/newest-cni/serial/SecondStart 35.83
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.76
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
382 TestStartStop/group/newest-cni/serial/Pause 3.02
TestDownloadOnly/v1.20.0/json-events (10.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-858677 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-858677 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.772635833s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.77s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 10:51:20.005940  510444 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0414 10:51:20.006049  510444 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-858677
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-858677: exit status 85 (64.633423ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-858677 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC |          |
	|         | -p download-only-858677        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 10:51:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 10:51:09.276789  510456 out.go:345] Setting OutFile to fd 1 ...
	I0414 10:51:09.277026  510456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 10:51:09.277034  510456 out.go:358] Setting ErrFile to fd 2...
	I0414 10:51:09.277038  510456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 10:51:09.277239  510456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	W0414 10:51:09.277365  510456 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20534-503273/.minikube/config/config.json: open /home/jenkins/minikube-integration/20534-503273/.minikube/config/config.json: no such file or directory
	I0414 10:51:09.277928  510456 out.go:352] Setting JSON to true
	I0414 10:51:09.278879  510456 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16420,"bootTime":1744611449,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 10:51:09.278941  510456 start.go:139] virtualization: kvm guest
	I0414 10:51:09.281283  510456 out.go:97] [download-only-858677] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 10:51:09.281858  510456 notify.go:220] Checking for updates...
	W0414 10:51:09.281875  510456 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 10:51:09.282834  510456 out.go:169] MINIKUBE_LOCATION=20534
	I0414 10:51:09.284602  510456 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 10:51:09.285986  510456 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 10:51:09.287463  510456 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 10:51:09.288833  510456 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 10:51:09.291369  510456 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 10:51:09.291576  510456 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 10:51:09.327550  510456 out.go:97] Using the kvm2 driver based on user configuration
	I0414 10:51:09.327584  510456 start.go:297] selected driver: kvm2
	I0414 10:51:09.327596  510456 start.go:901] validating driver "kvm2" against <nil>
	I0414 10:51:09.327938  510456 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 10:51:09.328034  510456 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20534-503273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 10:51:09.343990  510456 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 10:51:09.344050  510456 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 10:51:09.344631  510456 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 10:51:09.344785  510456 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 10:51:09.344827  510456 cni.go:84] Creating CNI manager for ""
	I0414 10:51:09.344878  510456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 10:51:09.344887  510456 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 10:51:09.344962  510456 start.go:340] cluster config:
	{Name:download-only-858677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-858677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 10:51:09.345149  510456 iso.go:125] acquiring lock: {Name:mkf550e25722092d7ac6a73b4b8e9a32a81cf3e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 10:51:09.347059  510456 out.go:97] Downloading VM boot image ...
	I0414 10:51:09.347093  510456 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20534-503273/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 10:51:13.154701  510456 out.go:97] Starting "download-only-858677" primary control-plane node in "download-only-858677" cluster
	I0414 10:51:13.154746  510456 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 10:51:13.186711  510456 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 10:51:13.186761  510456 cache.go:56] Caching tarball of preloaded images
	I0414 10:51:13.186950  510456 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 10:51:13.189099  510456 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 10:51:13.189133  510456 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0414 10:51:13.215457  510456 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 10:51:18.470510  510456 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0414 10:51:18.470600  510456 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-858677 host does not exist
	  To start a cluster, run: "minikube start -p download-only-858677"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-858677
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (4.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-702527 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-702527 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.416303506s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.42s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 10:51:24.770371  510444 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 10:51:24.770410  510444 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-503273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-702527
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-702527: exit status 85 (64.382505ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-858677 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC |                     |
	|         | -p download-only-858677        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC | 14 Apr 25 10:51 UTC |
	| delete  | -p download-only-858677        | download-only-858677 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC | 14 Apr 25 10:51 UTC |
	| start   | -o=json --download-only        | download-only-702527 | jenkins | v1.35.0 | 14 Apr 25 10:51 UTC |                     |
	|         | -p download-only-702527        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 10:51:20
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 10:51:20.394996  510675 out.go:345] Setting OutFile to fd 1 ...
	I0414 10:51:20.395322  510675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 10:51:20.395333  510675 out.go:358] Setting ErrFile to fd 2...
	I0414 10:51:20.395337  510675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 10:51:20.395511  510675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 10:51:20.396071  510675 out.go:352] Setting JSON to true
	I0414 10:51:20.397554  510675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16431,"bootTime":1744611449,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 10:51:20.398060  510675 start.go:139] virtualization: kvm guest
	I0414 10:51:20.399972  510675 out.go:97] [download-only-702527] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 10:51:20.400160  510675 notify.go:220] Checking for updates...
	I0414 10:51:20.401535  510675 out.go:169] MINIKUBE_LOCATION=20534
	I0414 10:51:20.403095  510675 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 10:51:20.404308  510675 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 10:51:20.405570  510675 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 10:51:20.406725  510675 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-702527 host does not exist
	  To start a cluster, run: "minikube start -p download-only-702527"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-702527
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0414 10:51:25.387704  510444 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-251180 --alsologtostderr --binary-mirror http://127.0.0.1:38675 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-251180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-251180
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestOffline (81.06s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-209305 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-209305 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.978122285s)
helpers_test.go:175: Cleaning up "offline-crio-209305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-209305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-209305: (1.086291631s)
--- PASS: TestOffline (81.06s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-345184
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-345184: exit status 85 (55.888104ms)

                                                
                                                
-- stdout --
	* Profile "addons-345184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-345184"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-345184
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-345184: exit status 85 (54.328737ms)

                                                
                                                
-- stdout --
	* Profile "addons-345184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-345184"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (133.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-345184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-345184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.656522716s)
--- PASS: TestAddons/Setup (133.66s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-345184 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-345184 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-345184 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-345184 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5df983c7-1248-4362-a462-b532a47b1844] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5df983c7-1248-4362-a462-b532a47b1844] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003691655s
addons_test.go:633: (dbg) Run:  kubectl --context addons-345184 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-345184 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-345184 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                    
TestAddons/parallel/Registry (23.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.926896ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-mhchq" [83fd759f-b12a-4f2b-8cbd-dc2dde8f4950] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004851277s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xxh28" [d81741d1-e338-403d-b33c-fd7170977c50] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005091801s
addons_test.go:331: (dbg) Run:  kubectl --context addons-345184 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-345184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-345184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.859601037s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 ip
2025/04/14 10:54:22 [DEBUG] GET http://192.168.39.54:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.82s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.08s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-drnd8" [5ebfafba-71a4-4e04-bb2d-21fcc755c626] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004426363s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable inspektor-gadget --alsologtostderr -v=1: (6.073377809s)
--- PASS: TestAddons/parallel/InspektorGadget (11.08s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.46s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 8.368331ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-f2z9r" [dd686ce1-d4c9-4679-b521-c5a1faaf9cb3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004689232s
addons_test.go:402: (dbg) Run:  kubectl --context addons-345184 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable metrics-server --alsologtostderr -v=1: (1.377854722s)
--- PASS: TestAddons/parallel/MetricsServer (6.46s)

                                                
                                    
TestAddons/parallel/CSI (56.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.69962ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-345184 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-345184 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0848668e-13d3-4dfc-9ab8-9fab1b6fe5a1] Pending
helpers_test.go:344: "task-pv-pod" [0848668e-13d3-4dfc-9ab8-9fab1b6fe5a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0848668e-13d3-4dfc-9ab8-9fab1b6fe5a1] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.007137614s
addons_test.go:511: (dbg) Run:  kubectl --context addons-345184 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-345184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-345184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-345184 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-345184 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-345184 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-345184 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2040774e-5f10-40e1-9bef-f49f9d881bb2] Pending
helpers_test.go:344: "task-pv-pod-restore" [2040774e-5f10-40e1-9bef-f49f9d881bb2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2040774e-5f10-40e1-9bef-f49f9d881bb2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003937834s
addons_test.go:553: (dbg) Run:  kubectl --context addons-345184 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-345184 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-345184 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable volumesnapshots --alsologtostderr -v=1: (1.065207017s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.857052135s)
--- PASS: TestAddons/parallel/CSI (56.70s)

                                                
                                    
TestAddons/parallel/Headlamp (19.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-345184 --alsologtostderr -v=1
I0414 10:53:59.224090  510444 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 10:53:59.231720  510444 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 10:53:59.231762  510444 kapi.go:107] duration metric: took 7.679766ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-345184 --alsologtostderr -v=1: (1.316053289s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-jt7ls" [4064f297-02e9-466d-baf7-3d158a5c0304] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-jt7ls" [4064f297-02e9-466d-baf7-3d158a5c0304] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-jt7ls" [4064f297-02e9-466d-baf7-3d158a5c0304] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004358516s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable headlamp --alsologtostderr -v=1: (6.226302302s)
--- PASS: TestAddons/parallel/Headlamp (19.55s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-6mqff" [416b9696-2b3f-4855-a039-b23faa08f181] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004001946s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (63.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-345184 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-345184 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0f042991-f2f1-45f1-8630-1440e6ec02ef] Pending
helpers_test.go:344: "test-local-path" [0f042991-f2f1-45f1-8630-1440e6ec02ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0f042991-f2f1-45f1-8630-1440e6ec02ef] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0f042991-f2f1-45f1-8630-1440e6ec02ef] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 13.005052033s
addons_test.go:906: (dbg) Run:  kubectl --context addons-345184 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 ssh "cat /opt/local-path-provisioner/pvc-120ee6d6-8650-470a-9b85-c8e61a164c70_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-345184 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-345184 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.36444763s)
--- PASS: TestAddons/parallel/LocalPath (63.23s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9t2t4" [4dd6cb16-270e-4be0-b1ed-35041c492ac3] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002458943s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

                                                
                                    
TestAddons/parallel/Yakd (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-jmlsh" [9ab06232-8f2f-430f-9681-eeb717699116] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005165377s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-345184 addons disable yakd --alsologtostderr -v=1: (5.750783626s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-345184
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-345184: (1m30.972412506s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-345184
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-345184
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-345184
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (54.01s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-494137 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-494137 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (52.757625638s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-494137 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-494137 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-494137 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-494137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-494137
--- PASS: TestCertOptions (54.01s)

                                                
                                    
TestCertExpiration (284.09s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623032 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623032 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m2.040382805s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623032 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623032 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.943717595s)
helpers_test.go:175: Cleaning up "cert-expiration-623032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-623032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-623032: (1.10048073s)
--- PASS: TestCertExpiration (284.09s)
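The two starts account for roughly 103s of the 284s total; the remainder is presumably spent waiting for the 3m certificates from the first start to expire before the second start regenerates them with --cert-expiration=8760h. While the profile still exists, the effect can be observed by hand (assuming the same certificate path as in TestCertOptions):

	out/minikube-linux-amd64 -p cert-expiration-623032 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"   # notAfter should move from ~3m out to ~1y out after the second start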

                                                
                                    
TestForceSystemdFlag (41.86s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-567758 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-567758 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.562346617s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-567758 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-567758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-567758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-567758: (1.065087961s)
--- PASS: TestForceSystemdFlag (41.86s)
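docker_test.go:132 is where the test inspects the CRI-O drop-in written by --force-systemd; the file contents are not shown in the log, but the check amounts to something like the following (the expected value is an assumption, not captured above):

	out/minikube-linux-amd64 -p force-systemd-flag-567758 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager   # expected: cgroup_manager = "systemd"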

                                                
                                    
TestForceSystemdEnv (43.53s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-233929 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-233929 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.505268502s)
helpers_test.go:175: Cleaning up "force-systemd-env-233929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-233929
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-233929: (1.02670922s)
--- PASS: TestForceSystemdEnv (43.53s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.89s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0414 11:54:03.919588  510444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 11:54:03.919787  510444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 11:54:03.956368  510444 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 11:54:03.956596  510444 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 11:54:03.956682  510444 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1588468655/001/docker-machine-driver-kvm2
I0414 11:54:04.174824  510444 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1588468655/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005f4ee8 gz:0xc0005f4f90 tar:0xc0005f4f40 tar.bz2:0xc0005f4f50 tar.gz:0xc0005f4f60 tar.xz:0xc0005f4f70 tar.zst:0xc0005f4f80 tbz2:0xc0005f4f50 tgz:0xc0005f4f60 txz:0xc0005f4f70 tzst:0xc0005f4f80 xz:0xc0005f4f98 zip:0xc0005f4fa0 zst:0xc0005f4fc0] Getters:map[file:0xc001f32420 http:0xc001e6e4b0 https:0xc001e6e500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 11:54:04.174885  510444 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1588468655/001/docker-machine-driver-kvm2
I0414 11:54:06.084296  510444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 11:54:06.084388  510444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 11:54:06.116983  510444 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 11:54:06.117016  510444 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 11:54:06.117079  510444 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 11:54:06.117106  510444 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1588468655/002/docker-machine-driver-kvm2
I0414 11:54:06.166226  510444 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1588468655/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005f4ee8 gz:0xc0005f4f90 tar:0xc0005f4f40 tar.bz2:0xc0005f4f50 tar.gz:0xc0005f4f60 tar.xz:0xc0005f4f70 tar.zst:0xc0005f4f80 tbz2:0xc0005f4f50 tgz:0xc0005f4f60 txz:0xc0005f4f70 tzst:0xc0005f4f80 xz:0xc0005f4f98 zip:0xc0005f4fa0 zst:0xc0005f4fc0] Getters:map[file:0xc001d14c10 http:0xc001d09270 https:0xc001d092c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 11:54:06.166272  510444 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1588468655/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.89s)
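Both download attempts above follow the same fallback: fetching the .sha256 checksum file for the arch-suffixed driver name returns 404, so the helper retries with the common, un-suffixed name. Stripped of the checksum handling, the fallback is roughly equivalent to this sketch (not minikube's actual code):

	base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
	curl -fLo docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2-amd64" \
	  || curl -fLo docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2"   # fall back to the common asset name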

                                                
                                    
TestErrorSpam/setup (39.05s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-280906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-280906 --driver=kvm2  --container-runtime=crio
E0414 10:58:40.365739  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:40.372183  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:40.383601  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:40.405022  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:40.446448  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:40.527963  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:40.689571  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:41.011327  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:41.653402  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:42.935086  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:45.498012  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:58:50.619525  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 10:59:00.861457  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-280906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-280906 --driver=kvm2  --container-runtime=crio: (39.04642457s)
--- PASS: TestErrorSpam/setup (39.05s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
TestErrorSpam/stop (6.19s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 stop: (2.311892864s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 stop: (1.937151305s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-280906 --log_dir /tmp/nospam-280906 stop: (1.938740199s)
--- PASS: TestErrorSpam/stop (6.19s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20534-503273/.minikube/files/etc/test/nested/copy/510444/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-575216 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0414 10:59:21.343555  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:00:02.306661  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-575216 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.317689374s)
--- PASS: TestFunctional/serial/StartWithProxy (82.32s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.1s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0414 11:00:39.605322  510444 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-575216 --alsologtostderr -v=8
E0414 11:01:24.228252  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-575216 --alsologtostderr -v=8: (54.097201341s)
functional_test.go:680: soft start took 54.097982645s for "functional-575216" cluster.
I0414 11:01:33.702855  510444 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (54.10s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-575216 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 cache add registry.k8s.io/pause:3.1: (1.022837476s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 cache add registry.k8s.io/pause:3.3: (1.116892288s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 cache add registry.k8s.io/pause:latest: (1.067838129s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-575216 /tmp/TestFunctionalserialCacheCmdcacheadd_local1245703401/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cache add minikube-local-cache-test:functional-575216
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 cache add minikube-local-cache-test:functional-575216: (1.581526132s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cache delete minikube-local-cache-test:functional-575216
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-575216
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.90s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.185774ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
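Condensed, the round-trip exercised here is: remove the image on the node, confirm crictl no longer finds it, run cache reload to push the cached images back, then confirm the lookup succeeds again (commands as run above):

	out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	out/minikube-linux-amd64 -p functional-575216 cache reload
	out/minikube-linux-amd64 -p functional-575216 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload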

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 kubectl -- --context functional-575216 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-575216 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.43s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-575216 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-575216 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.430235534s)
functional_test.go:778: restart took 32.43035764s for "functional-575216" cluster.
I0414 11:02:13.650794  510444 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (32.43s)
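The restart passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision through to the control plane; one way to confirm the flag actually landed, assuming the standard kubeadm static-pod manifest path, would be:

	out/minikube-linux-amd64 -p functional-575216 ssh "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"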

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-575216 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 logs: (1.302477062s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 logs --file /tmp/TestFunctionalserialLogsFileCmd2422086802/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 logs --file /tmp/TestFunctionalserialLogsFileCmd2422086802/001/logs.txt: (1.308362998s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
TestFunctional/serial/InvalidService (3.92s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-575216 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-575216
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-575216: exit status 115 (267.990627ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.89:31281 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-575216 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.92s)
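Exit status 115 (SVC_UNREACHABLE) is reported because the service has no running pod behind it; while testdata/invalidsvc.yaml is still applied this can be seen directly, assuming its selector matches no running pod and the Endpoints object therefore stays empty:

	kubectl --context functional-575216 get svc,endpoints invalid-svc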

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 config get cpus: exit status 14 (54.1475ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 config get cpus: exit status 14 (60.568998ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-575216 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-575216 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 517766: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.21s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-575216 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-575216 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (169.19221ms)

                                                
                                                
-- stdout --
	* [functional-575216] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:02:23.432929  517612 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:02:23.433082  517612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:02:23.433098  517612 out.go:358] Setting ErrFile to fd 2...
	I0414 11:02:23.433106  517612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:02:23.433396  517612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:02:23.434036  517612 out.go:352] Setting JSON to false
	I0414 11:02:23.435381  517612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17094,"bootTime":1744611449,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:02:23.435472  517612 start.go:139] virtualization: kvm guest
	I0414 11:02:23.438098  517612 out.go:177] * [functional-575216] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:02:23.439463  517612 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:02:23.439465  517612 notify.go:220] Checking for updates...
	I0414 11:02:23.440756  517612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:02:23.442124  517612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:02:23.443378  517612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:02:23.444529  517612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:02:23.445745  517612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:02:23.447341  517612 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:02:23.447773  517612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:02:23.447899  517612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:02:23.469856  517612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0414 11:02:23.470389  517612 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:02:23.471152  517612 main.go:141] libmachine: Using API Version  1
	I0414 11:02:23.471179  517612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:02:23.471655  517612 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:02:23.471858  517612 main.go:141] libmachine: (functional-575216) Calling .DriverName
	I0414 11:02:23.472183  517612 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:02:23.472718  517612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:02:23.472783  517612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:02:23.493948  517612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0414 11:02:23.494530  517612 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:02:23.495119  517612 main.go:141] libmachine: Using API Version  1
	I0414 11:02:23.495154  517612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:02:23.495559  517612 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:02:23.495764  517612 main.go:141] libmachine: (functional-575216) Calling .DriverName
	I0414 11:02:23.532301  517612 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 11:02:23.533665  517612 start.go:297] selected driver: kvm2
	I0414 11:02:23.533689  517612 start.go:901] validating driver "kvm2" against &{Name:functional-575216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-575216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:02:23.533866  517612 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:02:23.536036  517612 out.go:201] 
	W0414 11:02:23.537114  517612 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 11:02:23.538166  517612 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-575216 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-575216 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-575216 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.042162ms)

                                                
                                                
-- stdout --
	* [functional-575216] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:02:23.275559  517577 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:02:23.275926  517577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:02:23.275941  517577 out.go:358] Setting ErrFile to fd 2...
	I0414 11:02:23.275948  517577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:02:23.276740  517577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:02:23.277438  517577 out.go:352] Setting JSON to false
	I0414 11:02:23.278406  517577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17094,"bootTime":1744611449,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:02:23.278511  517577 start.go:139] virtualization: kvm guest
	I0414 11:02:23.280285  517577 out.go:177] * [functional-575216] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 11:02:23.281351  517577 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:02:23.281374  517577 notify.go:220] Checking for updates...
	I0414 11:02:23.283538  517577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:02:23.284647  517577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:02:23.285634  517577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:02:23.286529  517577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:02:23.287442  517577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:02:23.288831  517577 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:02:23.289301  517577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:02:23.289401  517577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:02:23.305826  517577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0414 11:02:23.306408  517577 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:02:23.307021  517577 main.go:141] libmachine: Using API Version  1
	I0414 11:02:23.307055  517577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:02:23.307517  517577 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:02:23.307716  517577 main.go:141] libmachine: (functional-575216) Calling .DriverName
	I0414 11:02:23.308052  517577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:02:23.308544  517577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:02:23.308605  517577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:02:23.324799  517577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0414 11:02:23.325304  517577 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:02:23.325870  517577 main.go:141] libmachine: Using API Version  1
	I0414 11:02:23.325899  517577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:02:23.326253  517577 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:02:23.326449  517577 main.go:141] libmachine: (functional-575216) Calling .DriverName
	I0414 11:02:23.360997  517577 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0414 11:02:23.362101  517577 start.go:297] selected driver: kvm2
	I0414 11:02:23.362121  517577 start.go:901] validating driver "kvm2" against &{Name:functional-575216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-575216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:02:23.362254  517577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:02:23.364658  517577 out.go:201] 
	W0414 11:02:23.365970  517577 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 11:02:23.367130  517577 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-575216 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-575216 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vzrv9" [930f58c2-8e8f-4075-a740-a7943abad1ac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vzrv9" [930f58c2-8e8f-4075-a740-a7943abad1ac] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.00344077s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.89:30755
functional_test.go:1692: http://192.168.39.89:30755: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-vzrv9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.89:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.89:30755
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.63s)
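The echoserver body above is simply the response to a plain GET against the NodePort URL printed by `service hello-node-connect --url`:

	curl http://192.168.39.89:30755/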

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.37s)
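Both invocations are read-only listings of addon state; a manual equivalent (same assumptions as above about the profile and the minikube binary) is simply:

    # human-readable table of addons and whether they are enabled
    minikube -p functional-575216 addons list
    # the same data as JSON, convenient for scripting
    minikube -p functional-575216 addons list -o json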

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b1f53de9-c63b-439a-8d86-291f7aeeff12] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003914839s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-575216 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-575216 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-575216 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-575216 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [613a95fd-4f0a-4c39-bb4e-5ed7521397b1] Pending
2025/04/14 11:02:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [613a95fd-4f0a-4c39-bb4e-5ed7521397b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [613a95fd-4f0a-4c39-bb4e-5ed7521397b1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.003432093s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-575216 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-575216 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-575216 delete -f testdata/storage-provisioner/pod.yaml: (1.208634969s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-575216 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0f1934ec-3d3a-4274-b6aa-2e8a9f943cba] Pending
helpers_test.go:344: "sp-pod" [0f1934ec-3d3a-4274-b6aa-2e8a9f943cba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0f1934ec-3d3a-4274-b6aa-2e8a9f943cba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003277252s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-575216 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.92s)
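The property being exercised is that data written through the PVC-backed mount survives deleting and recreating the pod. A sketch of the same flow by hand, reusing the test's manifests and the functional-575216 context:

    kubectl --context functional-575216 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-575216 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-575216 wait --for=condition=Ready pod/sp-pod --timeout=180s
    # write a file through the claim, then recreate the pod
    kubectl --context functional-575216 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-575216 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-575216 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-575216 wait --for=condition=Ready pod/sp-pod --timeout=180s
    # foo should still be listed if the volume really persisted
    kubectl --context functional-575216 exec sp-pod -- ls /tmp/mount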

                                                
                                    
TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh -n functional-575216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cp functional-575216:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2238297641/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh -n functional-575216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh -n functional-575216 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)
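The three cp invocations cover host-to-VM, VM-to-host, and copying into a guest path that does not yet exist (the final cat shows it is created on demand). Manually, with minikube on PATH:

    # host -> VM
    minikube -p functional-575216 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-575216 ssh -n functional-575216 "sudo cat /home/docker/cp-test.txt"
    # VM -> host
    minikube -p functional-575216 cp functional-575216:/home/docker/cp-test.txt /tmp/cp-test.txt
    # host -> VM into a directory that does not exist yet
    minikube -p functional-575216 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    minikube -p functional-575216 ssh -n functional-575216 "sudo cat /tmp/does/not/exist/cp-test.txt"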

                                                
                                    
TestFunctional/parallel/MySQL (28.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-575216 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-9ss9l" [93786479-3ecc-45ab-b2c9-4d12dfe7810d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-9ss9l" [93786479-3ecc-45ab-b2c9-4d12dfe7810d] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.003354362s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-575216 exec mysql-58ccfd96bb-9ss9l -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-575216 exec mysql-58ccfd96bb-9ss9l -- mysql -ppassword -e "show databases;": exit status 1 (116.35977ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0414 11:02:58.487130  510444 retry.go:31] will retry after 1.328240041s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-575216 exec mysql-58ccfd96bb-9ss9l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.78s)
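The first exec fails with ERROR 2002 because mysqld has not finished creating its socket even though the pod is already Running, and the harness simply waits and retries. A manual version of the same idea, assuming the manifest names the deployment mysql and uses the password shown above:

    kubectl --context functional-575216 replace --force -f testdata/mysql.yaml
    kubectl --context functional-575216 wait --for=condition=available deployment/mysql --timeout=600s
    # the server may refuse connections for a while after the pod is Running; retry until it answers
    until kubectl --context functional-575216 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
        sleep 2
    done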

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/510444/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo cat /etc/test/nested/copy/510444/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
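File sync mirrors host-side test files into the guest at the same nested path; 510444 here appears to be this run's test process id. Verifying the synced file by hand is a single ssh, assuming minikube on PATH:

    minikube -p functional-575216 ssh "sudo cat /etc/test/nested/copy/510444/hosts"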

                                                
                                    
TestFunctional/parallel/CertSync (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/510444.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo cat /etc/ssl/certs/510444.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/510444.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo cat /usr/share/ca-certificates/510444.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/5104442.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo cat /etc/ssl/certs/5104442.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/5104442.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo cat /usr/share/ca-certificates/5104442.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)
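The same certificate is expected in both canonical locations as well as under the hashed .0 name that OpenSSL-style lookups use. A manual spot check with this run's 510444-based file names:

    minikube -p functional-575216 ssh "sudo cat /etc/ssl/certs/510444.pem"
    minikube -p functional-575216 ssh "sudo cat /usr/share/ca-certificates/510444.pem"
    minikube -p functional-575216 ssh "sudo cat /etc/ssl/certs/51391683.0"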

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-575216 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
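The go-template walks the first node's metadata.labels map and prints only the keys. The same query, with the harness's nested quoting flattened for an interactive shell:

    kubectl --context functional-575216 get nodes --output=go-template \
        --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'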

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh "sudo systemctl is-active docker": exit status 1 (217.83103ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh "sudo systemctl is-active containerd": exit status 1 (233.032515ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
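Since this profile runs crio, docker and containerd are expected to be inactive; systemctl is-active prints "inactive" and exits non-zero (status 3 here), so the non-zero exit is the passing outcome. Checked by hand:

    # expected: "inactive" and a non-zero exit code for both
    minikube -p functional-575216 ssh "sudo systemctl is-active docker"
    minikube -p functional-575216 ssh "sudo systemctl is-active containerd"
    # the configured runtime, by contrast, should report "active"
    minikube -p functional-575216 ssh "sudo systemctl is-active crio"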

                                                
                                    
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-575216 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-575216
localhost/kicbase/echo-server:functional-575216
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-575216 image ls --format short --alsologtostderr:
I0414 11:02:43.154913  519227 out.go:345] Setting OutFile to fd 1 ...
I0414 11:02:43.155197  519227 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:43.155209  519227 out.go:358] Setting ErrFile to fd 2...
I0414 11:02:43.155213  519227 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:43.155473  519227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
I0414 11:02:43.156059  519227 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:43.156169  519227 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:43.156576  519227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:43.156662  519227 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:43.172821  519227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
I0414 11:02:43.173308  519227 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:43.174412  519227 main.go:141] libmachine: Using API Version  1
I0414 11:02:43.174449  519227 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:43.175559  519227 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:43.175959  519227 main.go:141] libmachine: (functional-575216) Calling .GetState
I0414 11:02:43.178035  519227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:43.178079  519227 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:43.193456  519227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
I0414 11:02:43.193865  519227 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:43.194335  519227 main.go:141] libmachine: Using API Version  1
I0414 11:02:43.194364  519227 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:43.194723  519227 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:43.194927  519227 main.go:141] libmachine: (functional-575216) Calling .DriverName
I0414 11:02:43.195128  519227 ssh_runner.go:195] Run: systemctl --version
I0414 11:02:43.195167  519227 main.go:141] libmachine: (functional-575216) Calling .GetSSHHostname
I0414 11:02:43.198221  519227 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:43.198606  519227 main.go:141] libmachine: (functional-575216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:cf:6a", ip: ""} in network mk-functional-575216: {Iface:virbr1 ExpiryTime:2025-04-14 11:59:31 +0000 UTC Type:0 Mac:52:54:00:ce:cf:6a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:functional-575216 Clientid:01:52:54:00:ce:cf:6a}
I0414 11:02:43.198634  519227 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined IP address 192.168.39.89 and MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:43.198830  519227 main.go:141] libmachine: (functional-575216) Calling .GetSSHPort
I0414 11:02:43.199012  519227 main.go:141] libmachine: (functional-575216) Calling .GetSSHKeyPath
I0414 11:02:43.199186  519227 main.go:141] libmachine: (functional-575216) Calling .GetSSHUsername
I0414 11:02:43.199358  519227 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/functional-575216/id_rsa Username:docker}
I0414 11:02:43.294161  519227 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:02:43.717858  519227 main.go:141] libmachine: Making call to close driver server
I0414 11:02:43.717871  519227 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:43.718164  519227 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:43.718185  519227 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:43.718196  519227 main.go:141] libmachine: Making call to close driver server
I0414 11:02:43.718204  519227 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:43.718448  519227 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:43.718467  519227 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:43.718472  519227 main.go:141] libmachine: (functional-575216) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.62s)
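As the stderr trace shows, image ls shells into the VM, runs sudo crictl images --output json, and renders the result; the ImageListTable, ImageListJson, and ImageListYaml tests below differ only in the --format flag. Manually:

    # the same listing in each format exercised by these tests
    minikube -p functional-575216 image ls --format short
    minikube -p functional-575216 image ls --format table
    minikube -p functional-575216 image ls --format json
    minikube -p functional-575216 image ls --format yaml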

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-575216 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-575216  | 06aab7c7d8b0e | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-575216  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-575216 image ls --format table --alsologtostderr:
I0414 11:02:46.342043  519417 out.go:345] Setting OutFile to fd 1 ...
I0414 11:02:46.342365  519417 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:46.342378  519417 out.go:358] Setting ErrFile to fd 2...
I0414 11:02:46.342385  519417 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:46.342660  519417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
I0414 11:02:46.343460  519417 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:46.343611  519417 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:46.344169  519417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:46.344249  519417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:46.360244  519417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46509
I0414 11:02:46.360741  519417 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:46.361440  519417 main.go:141] libmachine: Using API Version  1
I0414 11:02:46.361483  519417 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:46.361883  519417 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:46.362109  519417 main.go:141] libmachine: (functional-575216) Calling .GetState
I0414 11:02:46.364236  519417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:46.364285  519417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:46.379823  519417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34761
I0414 11:02:46.380300  519417 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:46.380786  519417 main.go:141] libmachine: Using API Version  1
I0414 11:02:46.380829  519417 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:46.381242  519417 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:46.381444  519417 main.go:141] libmachine: (functional-575216) Calling .DriverName
I0414 11:02:46.381663  519417 ssh_runner.go:195] Run: systemctl --version
I0414 11:02:46.381698  519417 main.go:141] libmachine: (functional-575216) Calling .GetSSHHostname
I0414 11:02:46.384953  519417 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:46.385378  519417 main.go:141] libmachine: (functional-575216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:cf:6a", ip: ""} in network mk-functional-575216: {Iface:virbr1 ExpiryTime:2025-04-14 11:59:31 +0000 UTC Type:0 Mac:52:54:00:ce:cf:6a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:functional-575216 Clientid:01:52:54:00:ce:cf:6a}
I0414 11:02:46.385412  519417 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined IP address 192.168.39.89 and MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:46.385533  519417 main.go:141] libmachine: (functional-575216) Calling .GetSSHPort
I0414 11:02:46.385729  519417 main.go:141] libmachine: (functional-575216) Calling .GetSSHKeyPath
I0414 11:02:46.385888  519417 main.go:141] libmachine: (functional-575216) Calling .GetSSHUsername
I0414 11:02:46.386050  519417 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/functional-575216/id_rsa Username:docker}
I0414 11:02:46.484249  519417 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:02:46.546992  519417 main.go:141] libmachine: Making call to close driver server
I0414 11:02:46.547009  519417 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:46.547345  519417 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:46.547410  519417 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:46.547424  519417 main.go:141] libmachine: Making call to close driver server
I0414 11:02:46.547432  519417 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:46.547371  519417 main.go:141] libmachine: (functional-575216) DBG | Closing plugin on server side
I0414 11:02:46.547686  519417 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:46.547705  519417 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:46.547740  519417 main.go:141] libmachine: (functional-575216) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-575216 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"06aab7c7d8b0e1fb469a0aedd463ffab62806c61bd6be0b5b535beda293c71ae","repoDigests":["localhost/minikube-local-cache-test@sha256:73caeac62190baddf1fc2bb7791d6965d6859197091cda2a1e629b1642b8109c"],"repoTags":["localhost/minikube-local-cache-test:functional-575216"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause
@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec1955
4caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-ser
ver@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-575216"],"size":"4943877"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["re
gistry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab98
9956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-575216 image ls --format json --alsologtostderr:
I0414 11:02:46.072232  519377 out.go:345] Setting OutFile to fd 1 ...
I0414 11:02:46.072371  519377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:46.072383  519377 out.go:358] Setting ErrFile to fd 2...
I0414 11:02:46.072389  519377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:46.072590  519377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
I0414 11:02:46.073197  519377 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:46.073298  519377 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:46.073645  519377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:46.073712  519377 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:46.089975  519377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
I0414 11:02:46.090556  519377 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:46.091149  519377 main.go:141] libmachine: Using API Version  1
I0414 11:02:46.091174  519377 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:46.091559  519377 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:46.091748  519377 main.go:141] libmachine: (functional-575216) Calling .GetState
I0414 11:02:46.093847  519377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:46.093899  519377 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:46.110360  519377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
I0414 11:02:46.110936  519377 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:46.111539  519377 main.go:141] libmachine: Using API Version  1
I0414 11:02:46.111565  519377 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:46.111943  519377 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:46.112122  519377 main.go:141] libmachine: (functional-575216) Calling .DriverName
I0414 11:02:46.112355  519377 ssh_runner.go:195] Run: systemctl --version
I0414 11:02:46.112387  519377 main.go:141] libmachine: (functional-575216) Calling .GetSSHHostname
I0414 11:02:46.115199  519377 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:46.115631  519377 main.go:141] libmachine: (functional-575216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:cf:6a", ip: ""} in network mk-functional-575216: {Iface:virbr1 ExpiryTime:2025-04-14 11:59:31 +0000 UTC Type:0 Mac:52:54:00:ce:cf:6a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:functional-575216 Clientid:01:52:54:00:ce:cf:6a}
I0414 11:02:46.115669  519377 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined IP address 192.168.39.89 and MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:46.115878  519377 main.go:141] libmachine: (functional-575216) Calling .GetSSHPort
I0414 11:02:46.116104  519377 main.go:141] libmachine: (functional-575216) Calling .GetSSHKeyPath
I0414 11:02:46.116268  519377 main.go:141] libmachine: (functional-575216) Calling .GetSSHUsername
I0414 11:02:46.116409  519377 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/functional-575216/id_rsa Username:docker}
I0414 11:02:46.225474  519377 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:02:46.282342  519377 main.go:141] libmachine: Making call to close driver server
I0414 11:02:46.282357  519377 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:46.282720  519377 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:46.282739  519377 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:46.282757  519377 main.go:141] libmachine: Making call to close driver server
I0414 11:02:46.282767  519377 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:46.283022  519377 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:46.283038  519377 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:46.283040  519377 main.go:141] libmachine: (functional-575216) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-575216 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 06aab7c7d8b0e1fb469a0aedd463ffab62806c61bd6be0b5b535beda293c71ae
repoDigests:
- localhost/minikube-local-cache-test@sha256:73caeac62190baddf1fc2bb7791d6965d6859197091cda2a1e629b1642b8109c
repoTags:
- localhost/minikube-local-cache-test:functional-575216
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-575216
size: "4943877"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-575216 image ls --format yaml --alsologtostderr:
I0414 11:02:43.784921  519250 out.go:345] Setting OutFile to fd 1 ...
I0414 11:02:43.785304  519250 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:43.785328  519250 out.go:358] Setting ErrFile to fd 2...
I0414 11:02:43.785337  519250 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:43.785617  519250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
I0414 11:02:43.786524  519250 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:43.786715  519250 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:43.787330  519250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:43.787426  519250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:43.804993  519250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
I0414 11:02:43.805474  519250 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:43.806070  519250 main.go:141] libmachine: Using API Version  1
I0414 11:02:43.806098  519250 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:43.806492  519250 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:43.806720  519250 main.go:141] libmachine: (functional-575216) Calling .GetState
I0414 11:02:43.808950  519250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:43.809011  519250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:43.825216  519250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45387
I0414 11:02:43.825741  519250 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:43.826203  519250 main.go:141] libmachine: Using API Version  1
I0414 11:02:43.826224  519250 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:43.826531  519250 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:43.826711  519250 main.go:141] libmachine: (functional-575216) Calling .DriverName
I0414 11:02:43.826985  519250 ssh_runner.go:195] Run: systemctl --version
I0414 11:02:43.827012  519250 main.go:141] libmachine: (functional-575216) Calling .GetSSHHostname
I0414 11:02:43.830157  519250 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:43.830682  519250 main.go:141] libmachine: (functional-575216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:cf:6a", ip: ""} in network mk-functional-575216: {Iface:virbr1 ExpiryTime:2025-04-14 11:59:31 +0000 UTC Type:0 Mac:52:54:00:ce:cf:6a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:functional-575216 Clientid:01:52:54:00:ce:cf:6a}
I0414 11:02:43.830719  519250 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined IP address 192.168.39.89 and MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:43.830852  519250 main.go:141] libmachine: (functional-575216) Calling .GetSSHPort
I0414 11:02:43.831147  519250 main.go:141] libmachine: (functional-575216) Calling .GetSSHKeyPath
I0414 11:02:43.831324  519250 main.go:141] libmachine: (functional-575216) Calling .GetSSHUsername
I0414 11:02:43.831500  519250 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/functional-575216/id_rsa Username:docker}
I0414 11:02:43.929310  519250 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:02:44.061978  519250 main.go:141] libmachine: Making call to close driver server
I0414 11:02:44.062000  519250 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:44.062324  519250 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:44.062342  519250 main.go:141] libmachine: (functional-575216) DBG | Closing plugin on server side
I0414 11:02:44.062346  519250 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:44.062378  519250 main.go:141] libmachine: Making call to close driver server
I0414 11:02:44.062386  519250 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:44.062662  519250 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:44.062679  519250 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:44.062739  519250 main.go:141] libmachine: (functional-575216) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh pgrep buildkitd: exit status 1 (252.43818ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image build -t localhost/my-image:functional-575216 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 image build -t localhost/my-image:functional-575216 testdata/build --alsologtostderr: (8.273760591s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-575216 image build -t localhost/my-image:functional-575216 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 16f29eb6f02
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-575216
--> e8428be2c3c
Successfully tagged localhost/my-image:functional-575216
e8428be2c3c2cd11c9404b1c11df64a922518deab83a965c0b9d3c1e3ac674b8
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-575216 image build -t localhost/my-image:functional-575216 testdata/build --alsologtostderr:
I0414 11:02:44.376706  519303 out.go:345] Setting OutFile to fd 1 ...
I0414 11:02:44.376990  519303 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:44.377000  519303 out.go:358] Setting ErrFile to fd 2...
I0414 11:02:44.377004  519303 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:02:44.377213  519303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
I0414 11:02:44.377798  519303 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:44.378559  519303 config.go:182] Loaded profile config "functional-575216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:02:44.379165  519303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:44.379233  519303 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:44.395180  519303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
I0414 11:02:44.395779  519303 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:44.396375  519303 main.go:141] libmachine: Using API Version  1
I0414 11:02:44.396404  519303 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:44.396807  519303 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:44.397019  519303 main.go:141] libmachine: (functional-575216) Calling .GetState
I0414 11:02:44.399003  519303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 11:02:44.399063  519303 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 11:02:44.415168  519303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36871
I0414 11:02:44.415760  519303 main.go:141] libmachine: () Calling .GetVersion
I0414 11:02:44.416411  519303 main.go:141] libmachine: Using API Version  1
I0414 11:02:44.416444  519303 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 11:02:44.416932  519303 main.go:141] libmachine: () Calling .GetMachineName
I0414 11:02:44.417122  519303 main.go:141] libmachine: (functional-575216) Calling .DriverName
I0414 11:02:44.417312  519303 ssh_runner.go:195] Run: systemctl --version
I0414 11:02:44.417349  519303 main.go:141] libmachine: (functional-575216) Calling .GetSSHHostname
I0414 11:02:44.420451  519303 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:44.420879  519303 main.go:141] libmachine: (functional-575216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:cf:6a", ip: ""} in network mk-functional-575216: {Iface:virbr1 ExpiryTime:2025-04-14 11:59:31 +0000 UTC Type:0 Mac:52:54:00:ce:cf:6a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:functional-575216 Clientid:01:52:54:00:ce:cf:6a}
I0414 11:02:44.420905  519303 main.go:141] libmachine: (functional-575216) DBG | domain functional-575216 has defined IP address 192.168.39.89 and MAC address 52:54:00:ce:cf:6a in network mk-functional-575216
I0414 11:02:44.421079  519303 main.go:141] libmachine: (functional-575216) Calling .GetSSHPort
I0414 11:02:44.421279  519303 main.go:141] libmachine: (functional-575216) Calling .GetSSHKeyPath
I0414 11:02:44.421424  519303 main.go:141] libmachine: (functional-575216) Calling .GetSSHUsername
I0414 11:02:44.421573  519303 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/functional-575216/id_rsa Username:docker}
I0414 11:02:44.513214  519303 build_images.go:161] Building image from path: /tmp/build.1913987549.tar
I0414 11:02:44.513304  519303 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 11:02:44.526082  519303 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1913987549.tar
I0414 11:02:44.532700  519303 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1913987549.tar: stat -c "%s %y" /var/lib/minikube/build/build.1913987549.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1913987549.tar': No such file or directory
I0414 11:02:44.532741  519303 ssh_runner.go:362] scp /tmp/build.1913987549.tar --> /var/lib/minikube/build/build.1913987549.tar (3072 bytes)
I0414 11:02:44.567467  519303 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1913987549
I0414 11:02:44.580137  519303 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1913987549 -xf /var/lib/minikube/build/build.1913987549.tar
I0414 11:02:44.593400  519303 crio.go:315] Building image: /var/lib/minikube/build/build.1913987549
I0414 11:02:44.593514  519303 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-575216 /var/lib/minikube/build/build.1913987549 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0414 11:02:52.549728  519303 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-575216 /var/lib/minikube/build/build.1913987549 --cgroup-manager=cgroupfs: (7.956173702s)
I0414 11:02:52.549808  519303 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1913987549
I0414 11:02:52.574108  519303 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1913987549.tar
I0414 11:02:52.588736  519303 build_images.go:217] Built localhost/my-image:functional-575216 from /tmp/build.1913987549.tar
I0414 11:02:52.588782  519303 build_images.go:133] succeeded building to: functional-575216
I0414 11:02:52.588788  519303 build_images.go:134] failed building to: 
I0414 11:02:52.588888  519303 main.go:141] libmachine: Making call to close driver server
I0414 11:02:52.588914  519303 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:52.589271  519303 main.go:141] libmachine: (functional-575216) DBG | Closing plugin on server side
I0414 11:02:52.589322  519303 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:52.589344  519303 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 11:02:52.589356  519303 main.go:141] libmachine: Making call to close driver server
I0414 11:02:52.589367  519303 main.go:141] libmachine: (functional-575216) Calling .Close
I0414 11:02:52.589647  519303 main.go:141] libmachine: Successfully made call to close driver server
I0414 11:02:52.589660  519303 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.79s)
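
For reference, the three build steps recorded above can be reproduced by hand with a minimal build context. This is a sketch only: the scratch directory and the contents of content.txt below are illustrative assumptions, not the actual testdata/build payload.

# Sketch of a build context matching the FROM / RUN / ADD steps in the log above.
# Directory name and file contents are assumptions for illustration only.
mkdir -p /tmp/minikube-build-sketch && cd /tmp/minikube-build-sketch
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "hello" > content.txt
out/minikube-linux-amd64 -p functional-575216 image build -t localhost/my-image:functional-575216 .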

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.515310692s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-575216
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-575216 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-575216 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-6t7dt" [af6a6bad-b614-4470-a0b7-19e37990240b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-6t7dt" [af6a6bad-b614-4470-a0b7-19e37990240b] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003718521s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)
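
The deployment flow exercised above can be replayed by hand with the same image and service type. The kubectl wait call is an illustrative stand-in (an assumption) for the test's own pod-readiness polling on the app=hello-node selector.

kubectl --context functional-575216 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-575216 expose deployment hello-node --type=NodePort --port=8080
# Stand-in for the test's readiness wait; 10m mirrors the timeout used above.
kubectl --context functional-575216 wait --for=condition=ready pod -l app=hello-node --timeout=10m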

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "294.006585ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "55.311825ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "381.288117ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "61.033799ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image load --daemon kicbase/echo-server:functional-575216 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-575216 image load --daemon kicbase/echo-server:functional-575216 --alsologtostderr: (3.407475316s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdany-port2810127246/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744628541890139236" to /tmp/TestFunctionalparallelMountCmdany-port2810127246/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744628541890139236" to /tmp/TestFunctionalparallelMountCmdany-port2810127246/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744628541890139236" to /tmp/TestFunctionalparallelMountCmdany-port2810127246/001/test-1744628541890139236
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.641345ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 11:02:22.160140  510444 retry.go:31] will retry after 647.520583ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 11:02 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 11:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 11:02 test-1744628541890139236
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh cat /mount-9p/test-1744628541890139236
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-575216 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b631358b-0238-42b3-834b-48e4608b2033] Pending
helpers_test.go:344: "busybox-mount" [b631358b-0238-42b3-834b-48e4608b2033] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b631358b-0238-42b3-834b-48e4608b2033] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b631358b-0238-42b3-834b-48e4608b2033] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004257075s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-575216 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdany-port2810127246/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.99s)
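
A minimal sketch of the 9p mount round-trip this test automates, using the same guest path; the host directory is a placeholder assumption (the test uses a per-run temp dir).

# Host directory below is a placeholder; run the mount in the background, then verify from the guest.
out/minikube-linux-amd64 mount -p functional-575216 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-575216 ssh -- ls -la /mount-9p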

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image load --daemon kicbase/echo-server:functional-575216 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-575216
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image load --daemon kicbase/echo-server:functional-575216 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image save kicbase/echo-server:functional-575216 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image rm kicbase/echo-server:functional-575216 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-575216
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 image save --daemon kicbase/echo-server:functional-575216 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-575216
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.90s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdspecific-port4026800028/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.106624ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 11:02:32.120087  510444 retry.go:31] will retry after 436.910254ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdspecific-port4026800028/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh "sudo umount -f /mount-9p": exit status 1 (196.045203ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-575216 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdspecific-port4026800028/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 service list -o json
functional_test.go:1511: Took "838.456833ms" to run "out/minikube-linux-amd64 -p functional-575216 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.89:32431
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1470111268/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1470111268/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1470111268/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T" /mount1: exit status 1 (262.708268ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 11:02:33.826779  510444 retry.go:31] will retry after 484.097137ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-575216 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1470111268/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1470111268/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-575216 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1470111268/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-575216 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.89:32431
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-575216
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-575216
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-575216
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (194.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-022919 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 11:03:40.357330  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:04:08.070273  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-022919 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.112683072s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.78s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-022919 -- rollout status deployment/busybox: (4.930920856s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-88qj9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-k4svr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-km5xl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-88qj9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-k4svr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-km5xl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-88qj9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-k4svr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-km5xl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.06s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-88qj9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-88qj9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-k4svr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-k4svr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-km5xl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-km5xl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
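
The host-reachability probe run against each busybox pod above reduces to two in-pod commands; the pod name and gateway address are taken verbatim from the log, so treat them as run-specific.

# Resolve host.minikube.internal inside the pod, then ping the libvirt gateway directly.
out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-88qj9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 kubectl -p ha-022919 -- exec busybox-58667487b6-88qj9 -- sh -c "ping -c 1 192.168.39.1"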

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-022919 -v=7 --alsologtostderr
E0414 11:07:20.784296  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:20.790798  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:20.802266  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:20.823751  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:20.865208  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:20.946797  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:21.108392  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:21.429889  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:22.071267  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:23.353619  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:25.915795  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:31.037908  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-022919 -v=7 --alsologtostderr: (55.088793283s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.93s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-022919 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp testdata/cp-test.txt ha-022919:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688065247/001/cp-test_ha-022919.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919:/home/docker/cp-test.txt ha-022919-m02:/home/docker/cp-test_ha-022919_ha-022919-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test_ha-022919_ha-022919-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919:/home/docker/cp-test.txt ha-022919-m03:/home/docker/cp-test_ha-022919_ha-022919-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test_ha-022919_ha-022919-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919:/home/docker/cp-test.txt ha-022919-m04:/home/docker/cp-test_ha-022919_ha-022919-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test_ha-022919_ha-022919-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp testdata/cp-test.txt ha-022919-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688065247/001/cp-test_ha-022919-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m02:/home/docker/cp-test.txt ha-022919:/home/docker/cp-test_ha-022919-m02_ha-022919.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test_ha-022919-m02_ha-022919.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m02:/home/docker/cp-test.txt ha-022919-m03:/home/docker/cp-test_ha-022919-m02_ha-022919-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test_ha-022919-m02_ha-022919-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m02:/home/docker/cp-test.txt ha-022919-m04:/home/docker/cp-test_ha-022919-m02_ha-022919-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test_ha-022919-m02_ha-022919-m04.txt"
E0414 11:07:41.280243  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp testdata/cp-test.txt ha-022919-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688065247/001/cp-test_ha-022919-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m03:/home/docker/cp-test.txt ha-022919:/home/docker/cp-test_ha-022919-m03_ha-022919.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test_ha-022919-m03_ha-022919.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m03:/home/docker/cp-test.txt ha-022919-m02:/home/docker/cp-test_ha-022919-m03_ha-022919-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test_ha-022919-m03_ha-022919-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m03:/home/docker/cp-test.txt ha-022919-m04:/home/docker/cp-test_ha-022919-m03_ha-022919-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test_ha-022919-m03_ha-022919-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp testdata/cp-test.txt ha-022919-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688065247/001/cp-test_ha-022919-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m04:/home/docker/cp-test.txt ha-022919:/home/docker/cp-test_ha-022919-m04_ha-022919.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919 "sudo cat /home/docker/cp-test_ha-022919-m04_ha-022919.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m04:/home/docker/cp-test.txt ha-022919-m02:/home/docker/cp-test_ha-022919-m04_ha-022919-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test_ha-022919-m04_ha-022919-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 cp ha-022919-m04:/home/docker/cp-test.txt ha-022919-m03:/home/docker/cp-test_ha-022919-m04_ha-022919-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m03 "sudo cat /home/docker/cp-test_ha-022919-m04_ha-022919-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.20s)
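
One leg of the copy matrix above, spelled out: push a file to a secondary control-plane node, then read it back over ssh. Both commands are taken directly from the log; m02 is just one of the node permutations covered.

out/minikube-linux-amd64 -p ha-022919 cp testdata/cp-test.txt ha-022919-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-022919 ssh -n ha-022919-m02 "sudo cat /home/docker/cp-test.txt"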

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 node stop m02 -v=7 --alsologtostderr
E0414 11:08:01.761804  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:08:40.356843  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:08:42.724061  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-022919 node stop m02 -v=7 --alsologtostderr: (1m30.809517815s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr: exit status 7 (653.606111ms)

                                                
                                                
-- stdout --
	ha-022919
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-022919-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-022919-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-022919-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:09:18.326580  524138 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:09:18.326831  524138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:18.326842  524138 out.go:358] Setting ErrFile to fd 2...
	I0414 11:09:18.326848  524138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:18.327089  524138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:09:18.327265  524138 out.go:352] Setting JSON to false
	I0414 11:09:18.327323  524138 mustload.go:65] Loading cluster: ha-022919
	I0414 11:09:18.327404  524138 notify.go:220] Checking for updates...
	I0414 11:09:18.327743  524138 config.go:182] Loaded profile config "ha-022919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:09:18.327789  524138 status.go:174] checking status of ha-022919 ...
	I0414 11:09:18.328332  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.328381  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.347628  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0414 11:09:18.348245  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.348944  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.348972  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.349356  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.349588  524138 main.go:141] libmachine: (ha-022919) Calling .GetState
	I0414 11:09:18.351834  524138 status.go:371] ha-022919 host status = "Running" (err=<nil>)
	I0414 11:09:18.351866  524138 host.go:66] Checking if "ha-022919" exists ...
	I0414 11:09:18.352338  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.352414  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.368596  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40135
	I0414 11:09:18.369095  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.369583  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.369606  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.369909  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.370090  524138 main.go:141] libmachine: (ha-022919) Calling .GetIP
	I0414 11:09:18.372878  524138 main.go:141] libmachine: (ha-022919) DBG | domain ha-022919 has defined MAC address 52:54:00:3f:26:3b in network mk-ha-022919
	I0414 11:09:18.373347  524138 main.go:141] libmachine: (ha-022919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:26:3b", ip: ""} in network mk-ha-022919: {Iface:virbr1 ExpiryTime:2025-04-14 12:03:28 +0000 UTC Type:0 Mac:52:54:00:3f:26:3b Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-022919 Clientid:01:52:54:00:3f:26:3b}
	I0414 11:09:18.373378  524138 main.go:141] libmachine: (ha-022919) DBG | domain ha-022919 has defined IP address 192.168.39.236 and MAC address 52:54:00:3f:26:3b in network mk-ha-022919
	I0414 11:09:18.373583  524138 host.go:66] Checking if "ha-022919" exists ...
	I0414 11:09:18.373924  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.373970  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.389291  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0414 11:09:18.389748  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.390234  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.390257  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.390708  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.390966  524138 main.go:141] libmachine: (ha-022919) Calling .DriverName
	I0414 11:09:18.391198  524138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:09:18.391252  524138 main.go:141] libmachine: (ha-022919) Calling .GetSSHHostname
	I0414 11:09:18.394780  524138 main.go:141] libmachine: (ha-022919) DBG | domain ha-022919 has defined MAC address 52:54:00:3f:26:3b in network mk-ha-022919
	I0414 11:09:18.395237  524138 main.go:141] libmachine: (ha-022919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:26:3b", ip: ""} in network mk-ha-022919: {Iface:virbr1 ExpiryTime:2025-04-14 12:03:28 +0000 UTC Type:0 Mac:52:54:00:3f:26:3b Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-022919 Clientid:01:52:54:00:3f:26:3b}
	I0414 11:09:18.395263  524138 main.go:141] libmachine: (ha-022919) DBG | domain ha-022919 has defined IP address 192.168.39.236 and MAC address 52:54:00:3f:26:3b in network mk-ha-022919
	I0414 11:09:18.395587  524138 main.go:141] libmachine: (ha-022919) Calling .GetSSHPort
	I0414 11:09:18.395750  524138 main.go:141] libmachine: (ha-022919) Calling .GetSSHKeyPath
	I0414 11:09:18.395927  524138 main.go:141] libmachine: (ha-022919) Calling .GetSSHUsername
	I0414 11:09:18.396075  524138 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/ha-022919/id_rsa Username:docker}
	I0414 11:09:18.480285  524138 ssh_runner.go:195] Run: systemctl --version
	I0414 11:09:18.487433  524138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:09:18.504681  524138 kubeconfig.go:125] found "ha-022919" server: "https://192.168.39.254:8443"
	I0414 11:09:18.504724  524138 api_server.go:166] Checking apiserver status ...
	I0414 11:09:18.504764  524138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:09:18.522707  524138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	W0414 11:09:18.533547  524138 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 11:09:18.533624  524138 ssh_runner.go:195] Run: ls
	I0414 11:09:18.538259  524138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 11:09:18.544536  524138 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 11:09:18.544562  524138 status.go:463] ha-022919 apiserver status = Running (err=<nil>)
	I0414 11:09:18.544576  524138 status.go:176] ha-022919 status: &{Name:ha-022919 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:09:18.544596  524138 status.go:174] checking status of ha-022919-m02 ...
	I0414 11:09:18.544912  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.544960  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.560586  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0414 11:09:18.561077  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.561545  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.561567  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.561955  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.562157  524138 main.go:141] libmachine: (ha-022919-m02) Calling .GetState
	I0414 11:09:18.563815  524138 status.go:371] ha-022919-m02 host status = "Stopped" (err=<nil>)
	I0414 11:09:18.563830  524138 status.go:384] host is not running, skipping remaining checks
	I0414 11:09:18.563836  524138 status.go:176] ha-022919-m02 status: &{Name:ha-022919-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:09:18.563859  524138 status.go:174] checking status of ha-022919-m03 ...
	I0414 11:09:18.564157  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.564208  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.580166  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I0414 11:09:18.580660  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.581190  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.581213  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.581567  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.581881  524138 main.go:141] libmachine: (ha-022919-m03) Calling .GetState
	I0414 11:09:18.583900  524138 status.go:371] ha-022919-m03 host status = "Running" (err=<nil>)
	I0414 11:09:18.583921  524138 host.go:66] Checking if "ha-022919-m03" exists ...
	I0414 11:09:18.584248  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.584294  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.601500  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0414 11:09:18.602027  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.602607  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.602632  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.603023  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.603224  524138 main.go:141] libmachine: (ha-022919-m03) Calling .GetIP
	I0414 11:09:18.605748  524138 main.go:141] libmachine: (ha-022919-m03) DBG | domain ha-022919-m03 has defined MAC address 52:54:00:02:40:43 in network mk-ha-022919
	I0414 11:09:18.606122  524138 main.go:141] libmachine: (ha-022919-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:40:43", ip: ""} in network mk-ha-022919: {Iface:virbr1 ExpiryTime:2025-04-14 12:05:27 +0000 UTC Type:0 Mac:52:54:00:02:40:43 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-022919-m03 Clientid:01:52:54:00:02:40:43}
	I0414 11:09:18.606143  524138 main.go:141] libmachine: (ha-022919-m03) DBG | domain ha-022919-m03 has defined IP address 192.168.39.93 and MAC address 52:54:00:02:40:43 in network mk-ha-022919
	I0414 11:09:18.606271  524138 host.go:66] Checking if "ha-022919-m03" exists ...
	I0414 11:09:18.606629  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.606691  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.622135  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I0414 11:09:18.622696  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.623320  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.623348  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.623706  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.623933  524138 main.go:141] libmachine: (ha-022919-m03) Calling .DriverName
	I0414 11:09:18.624219  524138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:09:18.624242  524138 main.go:141] libmachine: (ha-022919-m03) Calling .GetSSHHostname
	I0414 11:09:18.627361  524138 main.go:141] libmachine: (ha-022919-m03) DBG | domain ha-022919-m03 has defined MAC address 52:54:00:02:40:43 in network mk-ha-022919
	I0414 11:09:18.627826  524138 main.go:141] libmachine: (ha-022919-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:40:43", ip: ""} in network mk-ha-022919: {Iface:virbr1 ExpiryTime:2025-04-14 12:05:27 +0000 UTC Type:0 Mac:52:54:00:02:40:43 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-022919-m03 Clientid:01:52:54:00:02:40:43}
	I0414 11:09:18.627856  524138 main.go:141] libmachine: (ha-022919-m03) DBG | domain ha-022919-m03 has defined IP address 192.168.39.93 and MAC address 52:54:00:02:40:43 in network mk-ha-022919
	I0414 11:09:18.628074  524138 main.go:141] libmachine: (ha-022919-m03) Calling .GetSSHPort
	I0414 11:09:18.628279  524138 main.go:141] libmachine: (ha-022919-m03) Calling .GetSSHKeyPath
	I0414 11:09:18.628433  524138 main.go:141] libmachine: (ha-022919-m03) Calling .GetSSHUsername
	I0414 11:09:18.628568  524138 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/ha-022919-m03/id_rsa Username:docker}
	I0414 11:09:18.710996  524138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:09:18.728297  524138 kubeconfig.go:125] found "ha-022919" server: "https://192.168.39.254:8443"
	I0414 11:09:18.728329  524138 api_server.go:166] Checking apiserver status ...
	I0414 11:09:18.728381  524138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:09:18.743686  524138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	W0414 11:09:18.754524  524138 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 11:09:18.754585  524138 ssh_runner.go:195] Run: ls
	I0414 11:09:18.759355  524138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 11:09:18.763851  524138 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 11:09:18.763882  524138 status.go:463] ha-022919-m03 apiserver status = Running (err=<nil>)
	I0414 11:09:18.763892  524138 status.go:176] ha-022919-m03 status: &{Name:ha-022919-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:09:18.763912  524138 status.go:174] checking status of ha-022919-m04 ...
	I0414 11:09:18.764253  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.764301  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.780347  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0414 11:09:18.780852  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.781289  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.781314  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.781673  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.781883  524138 main.go:141] libmachine: (ha-022919-m04) Calling .GetState
	I0414 11:09:18.783546  524138 status.go:371] ha-022919-m04 host status = "Running" (err=<nil>)
	I0414 11:09:18.783566  524138 host.go:66] Checking if "ha-022919-m04" exists ...
	I0414 11:09:18.783974  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.784025  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.801369  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33689
	I0414 11:09:18.801868  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.802446  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.802476  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.802925  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.803143  524138 main.go:141] libmachine: (ha-022919-m04) Calling .GetIP
	I0414 11:09:18.806251  524138 main.go:141] libmachine: (ha-022919-m04) DBG | domain ha-022919-m04 has defined MAC address 52:54:00:3f:ba:b6 in network mk-ha-022919
	I0414 11:09:18.806715  524138 main.go:141] libmachine: (ha-022919-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ba:b6", ip: ""} in network mk-ha-022919: {Iface:virbr1 ExpiryTime:2025-04-14 12:06:52 +0000 UTC Type:0 Mac:52:54:00:3f:ba:b6 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-022919-m04 Clientid:01:52:54:00:3f:ba:b6}
	I0414 11:09:18.806749  524138 main.go:141] libmachine: (ha-022919-m04) DBG | domain ha-022919-m04 has defined IP address 192.168.39.207 and MAC address 52:54:00:3f:ba:b6 in network mk-ha-022919
	I0414 11:09:18.806885  524138 host.go:66] Checking if "ha-022919-m04" exists ...
	I0414 11:09:18.807243  524138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:09:18.807302  524138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:09:18.823810  524138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I0414 11:09:18.824249  524138 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:09:18.824659  524138 main.go:141] libmachine: Using API Version  1
	I0414 11:09:18.824681  524138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:09:18.825034  524138 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:09:18.825243  524138 main.go:141] libmachine: (ha-022919-m04) Calling .DriverName
	I0414 11:09:18.825424  524138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:09:18.825444  524138 main.go:141] libmachine: (ha-022919-m04) Calling .GetSSHHostname
	I0414 11:09:18.828695  524138 main.go:141] libmachine: (ha-022919-m04) DBG | domain ha-022919-m04 has defined MAC address 52:54:00:3f:ba:b6 in network mk-ha-022919
	I0414 11:09:18.829202  524138 main.go:141] libmachine: (ha-022919-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ba:b6", ip: ""} in network mk-ha-022919: {Iface:virbr1 ExpiryTime:2025-04-14 12:06:52 +0000 UTC Type:0 Mac:52:54:00:3f:ba:b6 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-022919-m04 Clientid:01:52:54:00:3f:ba:b6}
	I0414 11:09:18.829233  524138 main.go:141] libmachine: (ha-022919-m04) DBG | domain ha-022919-m04 has defined IP address 192.168.39.207 and MAC address 52:54:00:3f:ba:b6 in network mk-ha-022919
	I0414 11:09:18.829419  524138 main.go:141] libmachine: (ha-022919-m04) Calling .GetSSHPort
	I0414 11:09:18.829617  524138 main.go:141] libmachine: (ha-022919-m04) Calling .GetSSHKeyPath
	I0414 11:09:18.829801  524138 main.go:141] libmachine: (ha-022919-m04) Calling .GetSSHUsername
	I0414 11:09:18.829966  524138 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/ha-022919-m04/id_rsa Username:docker}
	I0414 11:09:18.907696  524138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:09:18.928470  524138 status.go:176] ha-022919-m04 status: &{Name:ha-022919-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (50.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 node start m02 -v=7 --alsologtostderr
E0414 11:10:04.645423  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-022919 node start m02 -v=7 --alsologtostderr: (49.119910913s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (50.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (467.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-022919 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-022919 -v=7 --alsologtostderr
E0414 11:12:20.784717  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:12:48.487324  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:13:40.357265  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-022919 -v=7 --alsologtostderr: (4m34.037659346s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-022919 --wait=true -v=7 --alsologtostderr
E0414 11:15:03.432016  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:17:20.784103  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-022919 --wait=true -v=7 --alsologtostderr: (3m13.519032669s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-022919
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (467.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-022919 node delete m03 -v=7 --alsologtostderr: (17.46572991s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (272.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 stop -v=7 --alsologtostderr
E0414 11:18:40.357229  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:22:20.785019  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-022919 stop -v=7 --alsologtostderr: (4m32.240045612s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr: exit status 7 (112.039756ms)

                                                
                                                
-- stdout --
	ha-022919
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-022919-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-022919-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:22:49.296671  528931 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:22:49.296994  528931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:22:49.297008  528931 out.go:358] Setting ErrFile to fd 2...
	I0414 11:22:49.297012  528931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:22:49.297208  528931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:22:49.297398  528931 out.go:352] Setting JSON to false
	I0414 11:22:49.297440  528931 mustload.go:65] Loading cluster: ha-022919
	I0414 11:22:49.297502  528931 notify.go:220] Checking for updates...
	I0414 11:22:49.297993  528931 config.go:182] Loaded profile config "ha-022919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:22:49.298041  528931 status.go:174] checking status of ha-022919 ...
	I0414 11:22:49.299475  528931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:22:49.299580  528931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:22:49.316609  528931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0414 11:22:49.317110  528931 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:22:49.317632  528931 main.go:141] libmachine: Using API Version  1
	I0414 11:22:49.317658  528931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:22:49.318112  528931 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:22:49.318353  528931 main.go:141] libmachine: (ha-022919) Calling .GetState
	I0414 11:22:49.320234  528931 status.go:371] ha-022919 host status = "Stopped" (err=<nil>)
	I0414 11:22:49.320257  528931 status.go:384] host is not running, skipping remaining checks
	I0414 11:22:49.320263  528931 status.go:176] ha-022919 status: &{Name:ha-022919 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:22:49.320284  528931 status.go:174] checking status of ha-022919-m02 ...
	I0414 11:22:49.320621  528931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:22:49.320669  528931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:22:49.336027  528931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0414 11:22:49.336538  528931 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:22:49.336960  528931 main.go:141] libmachine: Using API Version  1
	I0414 11:22:49.336984  528931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:22:49.337336  528931 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:22:49.337512  528931 main.go:141] libmachine: (ha-022919-m02) Calling .GetState
	I0414 11:22:49.339028  528931 status.go:371] ha-022919-m02 host status = "Stopped" (err=<nil>)
	I0414 11:22:49.339056  528931 status.go:384] host is not running, skipping remaining checks
	I0414 11:22:49.339061  528931 status.go:176] ha-022919-m02 status: &{Name:ha-022919-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:22:49.339077  528931 status.go:174] checking status of ha-022919-m04 ...
	I0414 11:22:49.339375  528931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:22:49.339414  528931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:22:49.354492  528931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0414 11:22:49.354954  528931 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:22:49.355424  528931 main.go:141] libmachine: Using API Version  1
	I0414 11:22:49.355452  528931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:22:49.355789  528931 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:22:49.355994  528931 main.go:141] libmachine: (ha-022919-m04) Calling .GetState
	I0414 11:22:49.357482  528931 status.go:371] ha-022919-m04 host status = "Stopped" (err=<nil>)
	I0414 11:22:49.357495  528931 status.go:384] host is not running, skipping remaining checks
	I0414 11:22:49.357500  528931 status.go:176] ha-022919-m04 status: &{Name:ha-022919-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (101.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-022919 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 11:23:40.359691  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:43.849058  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-022919 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m40.424112884s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-022919 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-022919 --control-plane -v=7 --alsologtostderr: (1m17.181415537s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-022919 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
x
+
TestJSONOutput/start/Command (52.09s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-425734 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-425734 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (52.091227866s)
--- PASS: TestJSONOutput/start/Command (52.09s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-425734 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-425734 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.66s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-425734 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-425734 --output=json --user=testUser: (6.659251601s)
--- PASS: TestJSONOutput/stop/Command (6.66s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-654459 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-654459 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.261368ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"089f78c3-6394-41dc-b8fb-17af695ee197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-654459] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"005953fd-05ea-4d8f-9170-778ff7b66963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20534"}}
	{"specversion":"1.0","id":"769e7ef9-d2e1-4f17-aadb-f218a8dc4fc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f58e6972-bcdd-46a5-bcc2-9fa5afbef429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig"}}
	{"specversion":"1.0","id":"87a8fb3b-c219-4e7c-843d-d8bc93a96e10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube"}}
	{"specversion":"1.0","id":"b2c37b7e-0bf1-4425-87fe-e79128e15a86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"69f1bfe1-0913-4bfa-a411-b44f0327bc8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"824ab6aa-7c83-455b-8462-450d4e19470c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-654459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-654459
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (88.09s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-154339 --driver=kvm2  --container-runtime=crio
E0414 11:27:20.784664  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-154339 --driver=kvm2  --container-runtime=crio: (42.618098976s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-165187 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-165187 --driver=kvm2  --container-runtime=crio: (42.75330786s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-154339
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-165187
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-165187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-165187
helpers_test.go:175: Cleaning up "first-154339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-154339
--- PASS: TestMinikubeProfile (88.09s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (26.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-417042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0414 11:28:40.365435  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-417042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.085945836s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-417042 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-417042 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (24.32s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-435002 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-435002 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.323639673s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.32s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435002 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435002 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-417042 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435002 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435002 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-435002
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-435002: (1.28099534s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-435002
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-435002: (21.944751342s)
--- PASS: TestMountStart/serial/RestartStopped (22.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435002 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435002 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (115.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-099605 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-099605 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.762609253s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.17s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-099605 -- rollout status deployment/busybox: (5.170912678s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-4sclj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-jrx9g -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-4sclj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-jrx9g -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-4sclj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-jrx9g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.61s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-4sclj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-4sclj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-jrx9g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-099605 -- exec busybox-58667487b6-jrx9g -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (51.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-099605 -v 3 --alsologtostderr
E0414 11:31:43.433398  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:32:20.784396  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-099605 -v 3 --alsologtostderr: (50.911550145s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.48s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-099605 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp testdata/cp-test.txt multinode-099605:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2856354004/001/cp-test_multinode-099605.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605:/home/docker/cp-test.txt multinode-099605-m02:/home/docker/cp-test_multinode-099605_multinode-099605-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m02 "sudo cat /home/docker/cp-test_multinode-099605_multinode-099605-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605:/home/docker/cp-test.txt multinode-099605-m03:/home/docker/cp-test_multinode-099605_multinode-099605-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m03 "sudo cat /home/docker/cp-test_multinode-099605_multinode-099605-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp testdata/cp-test.txt multinode-099605-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2856354004/001/cp-test_multinode-099605-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605-m02:/home/docker/cp-test.txt multinode-099605:/home/docker/cp-test_multinode-099605-m02_multinode-099605.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605 "sudo cat /home/docker/cp-test_multinode-099605-m02_multinode-099605.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605-m02:/home/docker/cp-test.txt multinode-099605-m03:/home/docker/cp-test_multinode-099605-m02_multinode-099605-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m03 "sudo cat /home/docker/cp-test_multinode-099605-m02_multinode-099605-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp testdata/cp-test.txt multinode-099605-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2856354004/001/cp-test_multinode-099605-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605-m03:/home/docker/cp-test.txt multinode-099605:/home/docker/cp-test_multinode-099605-m03_multinode-099605.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605 "sudo cat /home/docker/cp-test_multinode-099605-m03_multinode-099605.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605-m03:/home/docker/cp-test.txt multinode-099605-m02:/home/docker/cp-test_multinode-099605-m03_multinode-099605-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 ssh -n multinode-099605-m02 "sudo cat /home/docker/cp-test_multinode-099605-m03_multinode-099605-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.17s)
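The copy subtest above exercises the three cp directions (host to node, node to host, node to node) for every node pair. A condensed, illustrative round trip using the same profile and node names from the log; the final diff is an added sanity check, not part of the test:

    $ out/minikube-linux-amd64 -p multinode-099605 cp testdata/cp-test.txt multinode-099605-m02:/home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p multinode-099605 cp multinode-099605-m02:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    $ diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt   # no output means the file survived the round trip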

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-099605 node stop m03: (1.384935036s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-099605 status: exit status 7 (422.634355ms)

                                                
                                                
-- stdout --
	multinode-099605
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-099605-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-099605-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr: exit status 7 (426.120454ms)

                                                
                                                
-- stdout --
	multinode-099605
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-099605-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-099605-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:32:44.386805  536614 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:32:44.387122  536614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:32:44.387137  536614 out.go:358] Setting ErrFile to fd 2...
	I0414 11:32:44.387143  536614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:32:44.387446  536614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:32:44.387687  536614 out.go:352] Setting JSON to false
	I0414 11:32:44.387738  536614 mustload.go:65] Loading cluster: multinode-099605
	I0414 11:32:44.387788  536614 notify.go:220] Checking for updates...
	I0414 11:32:44.388290  536614 config.go:182] Loaded profile config "multinode-099605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:32:44.388329  536614 status.go:174] checking status of multinode-099605 ...
	I0414 11:32:44.388908  536614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:32:44.388967  536614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:32:44.405588  536614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0414 11:32:44.406024  536614 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:32:44.406646  536614 main.go:141] libmachine: Using API Version  1
	I0414 11:32:44.406677  536614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:32:44.407120  536614 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:32:44.407359  536614 main.go:141] libmachine: (multinode-099605) Calling .GetState
	I0414 11:32:44.408877  536614 status.go:371] multinode-099605 host status = "Running" (err=<nil>)
	I0414 11:32:44.408897  536614 host.go:66] Checking if "multinode-099605" exists ...
	I0414 11:32:44.409214  536614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:32:44.409254  536614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:32:44.425211  536614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0414 11:32:44.425641  536614 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:32:44.426038  536614 main.go:141] libmachine: Using API Version  1
	I0414 11:32:44.426060  536614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:32:44.426399  536614 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:32:44.426574  536614 main.go:141] libmachine: (multinode-099605) Calling .GetIP
	I0414 11:32:44.429384  536614 main.go:141] libmachine: (multinode-099605) DBG | domain multinode-099605 has defined MAC address 52:54:00:fb:79:65 in network mk-multinode-099605
	I0414 11:32:44.429806  536614 main.go:141] libmachine: (multinode-099605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:79:65", ip: ""} in network mk-multinode-099605: {Iface:virbr1 ExpiryTime:2025-04-14 12:29:55 +0000 UTC Type:0 Mac:52:54:00:fb:79:65 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-099605 Clientid:01:52:54:00:fb:79:65}
	I0414 11:32:44.429850  536614 main.go:141] libmachine: (multinode-099605) DBG | domain multinode-099605 has defined IP address 192.168.39.31 and MAC address 52:54:00:fb:79:65 in network mk-multinode-099605
	I0414 11:32:44.430010  536614 host.go:66] Checking if "multinode-099605" exists ...
	I0414 11:32:44.430294  536614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:32:44.430337  536614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:32:44.445977  536614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34523
	I0414 11:32:44.446510  536614 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:32:44.446939  536614 main.go:141] libmachine: Using API Version  1
	I0414 11:32:44.446965  536614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:32:44.447361  536614 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:32:44.447530  536614 main.go:141] libmachine: (multinode-099605) Calling .DriverName
	I0414 11:32:44.447709  536614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:32:44.447727  536614 main.go:141] libmachine: (multinode-099605) Calling .GetSSHHostname
	I0414 11:32:44.450719  536614 main.go:141] libmachine: (multinode-099605) DBG | domain multinode-099605 has defined MAC address 52:54:00:fb:79:65 in network mk-multinode-099605
	I0414 11:32:44.451120  536614 main.go:141] libmachine: (multinode-099605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:79:65", ip: ""} in network mk-multinode-099605: {Iface:virbr1 ExpiryTime:2025-04-14 12:29:55 +0000 UTC Type:0 Mac:52:54:00:fb:79:65 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-099605 Clientid:01:52:54:00:fb:79:65}
	I0414 11:32:44.451149  536614 main.go:141] libmachine: (multinode-099605) DBG | domain multinode-099605 has defined IP address 192.168.39.31 and MAC address 52:54:00:fb:79:65 in network mk-multinode-099605
	I0414 11:32:44.451341  536614 main.go:141] libmachine: (multinode-099605) Calling .GetSSHPort
	I0414 11:32:44.451513  536614 main.go:141] libmachine: (multinode-099605) Calling .GetSSHKeyPath
	I0414 11:32:44.451646  536614 main.go:141] libmachine: (multinode-099605) Calling .GetSSHUsername
	I0414 11:32:44.451778  536614 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/multinode-099605/id_rsa Username:docker}
	I0414 11:32:44.530752  536614 ssh_runner.go:195] Run: systemctl --version
	I0414 11:32:44.536346  536614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:32:44.550364  536614 kubeconfig.go:125] found "multinode-099605" server: "https://192.168.39.31:8443"
	I0414 11:32:44.550421  536614 api_server.go:166] Checking apiserver status ...
	I0414 11:32:44.550472  536614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:32:44.564488  536614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1054/cgroup
	W0414 11:32:44.574508  536614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1054/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 11:32:44.574578  536614 ssh_runner.go:195] Run: ls
	I0414 11:32:44.578824  536614 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I0414 11:32:44.583680  536614 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I0414 11:32:44.583705  536614 status.go:463] multinode-099605 apiserver status = Running (err=<nil>)
	I0414 11:32:44.583719  536614 status.go:176] multinode-099605 status: &{Name:multinode-099605 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:32:44.583743  536614 status.go:174] checking status of multinode-099605-m02 ...
	I0414 11:32:44.584049  536614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:32:44.584105  536614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:32:44.600923  536614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38891
	I0414 11:32:44.601418  536614 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:32:44.601883  536614 main.go:141] libmachine: Using API Version  1
	I0414 11:32:44.601900  536614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:32:44.602237  536614 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:32:44.602443  536614 main.go:141] libmachine: (multinode-099605-m02) Calling .GetState
	I0414 11:32:44.604081  536614 status.go:371] multinode-099605-m02 host status = "Running" (err=<nil>)
	I0414 11:32:44.604096  536614 host.go:66] Checking if "multinode-099605-m02" exists ...
	I0414 11:32:44.604391  536614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:32:44.604428  536614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:32:44.620923  536614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0414 11:32:44.621436  536614 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:32:44.621980  536614 main.go:141] libmachine: Using API Version  1
	I0414 11:32:44.622012  536614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:32:44.622370  536614 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:32:44.622550  536614 main.go:141] libmachine: (multinode-099605-m02) Calling .GetIP
	I0414 11:32:44.625303  536614 main.go:141] libmachine: (multinode-099605-m02) DBG | domain multinode-099605-m02 has defined MAC address 52:54:00:ce:c2:ee in network mk-multinode-099605
	I0414 11:32:44.625721  536614 main.go:141] libmachine: (multinode-099605-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:c2:ee", ip: ""} in network mk-multinode-099605: {Iface:virbr1 ExpiryTime:2025-04-14 12:30:57 +0000 UTC Type:0 Mac:52:54:00:ce:c2:ee Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-099605-m02 Clientid:01:52:54:00:ce:c2:ee}
	I0414 11:32:44.625755  536614 main.go:141] libmachine: (multinode-099605-m02) DBG | domain multinode-099605-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:ce:c2:ee in network mk-multinode-099605
	I0414 11:32:44.625928  536614 host.go:66] Checking if "multinode-099605-m02" exists ...
	I0414 11:32:44.626333  536614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:32:44.626383  536614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:32:44.642241  536614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0414 11:32:44.642719  536614 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:32:44.643344  536614 main.go:141] libmachine: Using API Version  1
	I0414 11:32:44.643370  536614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:32:44.643676  536614 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:32:44.643884  536614 main.go:141] libmachine: (multinode-099605-m02) Calling .DriverName
	I0414 11:32:44.644027  536614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:32:44.644061  536614 main.go:141] libmachine: (multinode-099605-m02) Calling .GetSSHHostname
	I0414 11:32:44.646712  536614 main.go:141] libmachine: (multinode-099605-m02) DBG | domain multinode-099605-m02 has defined MAC address 52:54:00:ce:c2:ee in network mk-multinode-099605
	I0414 11:32:44.647060  536614 main.go:141] libmachine: (multinode-099605-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:c2:ee", ip: ""} in network mk-multinode-099605: {Iface:virbr1 ExpiryTime:2025-04-14 12:30:57 +0000 UTC Type:0 Mac:52:54:00:ce:c2:ee Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-099605-m02 Clientid:01:52:54:00:ce:c2:ee}
	I0414 11:32:44.647098  536614 main.go:141] libmachine: (multinode-099605-m02) DBG | domain multinode-099605-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:ce:c2:ee in network mk-multinode-099605
	I0414 11:32:44.647262  536614 main.go:141] libmachine: (multinode-099605-m02) Calling .GetSSHPort
	I0414 11:32:44.647437  536614 main.go:141] libmachine: (multinode-099605-m02) Calling .GetSSHKeyPath
	I0414 11:32:44.647579  536614 main.go:141] libmachine: (multinode-099605-m02) Calling .GetSSHUsername
	I0414 11:32:44.647718  536614 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20534-503273/.minikube/machines/multinode-099605-m02/id_rsa Username:docker}
	I0414 11:32:44.730251  536614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:32:44.743469  536614 status.go:176] multinode-099605-m02 status: &{Name:multinode-099605-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:32:44.743516  536614 status.go:174] checking status of multinode-099605-m03 ...
	I0414 11:32:44.743937  536614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:32:44.744000  536614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:32:44.760431  536614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0414 11:32:44.760955  536614 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:32:44.761405  536614 main.go:141] libmachine: Using API Version  1
	I0414 11:32:44.761428  536614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:32:44.761794  536614 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:32:44.761995  536614 main.go:141] libmachine: (multinode-099605-m03) Calling .GetState
	I0414 11:32:44.763612  536614 status.go:371] multinode-099605-m03 host status = "Stopped" (err=<nil>)
	I0414 11:32:44.763631  536614 status.go:384] host is not running, skipping remaining checks
	I0414 11:32:44.763638  536614 status.go:176] multinode-099605-m03 status: &{Name:multinode-099605-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
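Worth noting: once any node is stopped, the status command deliberately returns exit code 7 (the non-zero exits captured above), so callers have to treat 7 as "degraded but expected" rather than a hard failure. An illustrative check with the same profile and node:

    $ out/minikube-linux-amd64 -p multinode-099605 node stop m03
    $ out/minikube-linux-amd64 -p multinode-099605 status; echo "status exit=$?"   # prints 7 while m03 is down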

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-099605 node start m03 -v=7 --alsologtostderr: (37.984463321s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.61s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (341.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-099605
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-099605
E0414 11:33:40.365615  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-099605: (3m3.032482728s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-099605 --wait=true -v=8 --alsologtostderr
E0414 11:37:20.784062  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:38:40.356910  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-099605 --wait=true -v=8 --alsologtostderr: (2m38.6686s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-099605
--- PASS: TestMultiNode/serial/RestartKeepsNodes (341.80s)
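The restart sequence above boils down to: capture the node list, stop the whole cluster, start it again with --wait=true, then confirm the node list is unchanged. An illustrative replay with the same profile and flags from the log:

    $ out/minikube-linux-amd64 node list -p multinode-099605        # record the node set
    $ out/minikube-linux-amd64 stop -p multinode-099605
    $ out/minikube-linux-amd64 start -p multinode-099605 --wait=true -v=8 --alsologtostderr
    $ out/minikube-linux-amd64 node list -p multinode-099605        # should match the first listing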

                                                
                                    
TestMultiNode/serial/DeleteNode (2.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-099605 node delete m03: (2.212105934s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.76s)
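The go-template in the last check above is the quickest way to confirm every remaining node reports Ready after the delete. Illustrative standalone form; the grep count is an added convenience, not part of the test:

    $ kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}' | grep -c True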

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 stop
E0414 11:40:23.852134  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-099605 stop: (3m1.515271403s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-099605 status: exit status 7 (90.482964ms)

                                                
                                                
-- stdout --
	multinode-099605
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-099605-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr: exit status 7 (88.448617ms)

                                                
                                                
-- stdout --
	multinode-099605
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-099605-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:42:09.586741  539673 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:42:09.586862  539673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:42:09.586870  539673 out.go:358] Setting ErrFile to fd 2...
	I0414 11:42:09.586877  539673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:42:09.587070  539673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:42:09.587228  539673 out.go:352] Setting JSON to false
	I0414 11:42:09.587262  539673 mustload.go:65] Loading cluster: multinode-099605
	I0414 11:42:09.587363  539673 notify.go:220] Checking for updates...
	I0414 11:42:09.587684  539673 config.go:182] Loaded profile config "multinode-099605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:42:09.587709  539673 status.go:174] checking status of multinode-099605 ...
	I0414 11:42:09.588203  539673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:42:09.588271  539673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:42:09.603800  539673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0414 11:42:09.604255  539673 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:42:09.604829  539673 main.go:141] libmachine: Using API Version  1
	I0414 11:42:09.604857  539673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:42:09.605216  539673 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:42:09.605399  539673 main.go:141] libmachine: (multinode-099605) Calling .GetState
	I0414 11:42:09.606912  539673 status.go:371] multinode-099605 host status = "Stopped" (err=<nil>)
	I0414 11:42:09.606932  539673 status.go:384] host is not running, skipping remaining checks
	I0414 11:42:09.606940  539673 status.go:176] multinode-099605 status: &{Name:multinode-099605 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:42:09.606975  539673 status.go:174] checking status of multinode-099605-m02 ...
	I0414 11:42:09.607274  539673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 11:42:09.607334  539673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 11:42:09.622568  539673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42299
	I0414 11:42:09.622999  539673 main.go:141] libmachine: () Calling .GetVersion
	I0414 11:42:09.623554  539673 main.go:141] libmachine: Using API Version  1
	I0414 11:42:09.623587  539673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 11:42:09.623960  539673 main.go:141] libmachine: () Calling .GetMachineName
	I0414 11:42:09.624134  539673 main.go:141] libmachine: (multinode-099605-m02) Calling .GetState
	I0414 11:42:09.625869  539673 status.go:371] multinode-099605-m02 host status = "Stopped" (err=<nil>)
	I0414 11:42:09.625885  539673 status.go:384] host is not running, skipping remaining checks
	I0414 11:42:09.625892  539673 status.go:176] multinode-099605-m02 status: &{Name:multinode-099605-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.69s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (114.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-099605 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 11:42:20.784698  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:43:40.357800  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-099605 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.867887389s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-099605 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (114.40s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-099605
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-099605-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-099605-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.219662ms)

                                                
                                                
-- stdout --
	* [multinode-099605-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-099605-m02' is duplicated with machine name 'multinode-099605-m02' in profile 'multinode-099605'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-099605-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-099605-m03 --driver=kvm2  --container-runtime=crio: (41.684491434s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-099605
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-099605: exit status 80 (217.480851ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-099605 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-099605-m03 already exists in multinode-099605-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-099605-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-099605-m03: (1.010283921s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.03s)

                                                
                                    
TestScheduledStopUnix (114.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-849683 --memory=2048 --driver=kvm2  --container-runtime=crio
E0414 11:48:23.436163  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:48:40.362168  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-849683 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.360423204s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849683 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-849683 -n scheduled-stop-849683
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849683 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 11:48:50.729391  510444 retry.go:31] will retry after 130.501µs: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.730546  510444 retry.go:31] will retry after 104.919µs: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.731688  510444 retry.go:31] will retry after 174.652µs: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.732829  510444 retry.go:31] will retry after 263.341µs: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.733918  510444 retry.go:31] will retry after 481.62µs: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.735047  510444 retry.go:31] will retry after 1.089562ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.737229  510444 retry.go:31] will retry after 1.318055ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.739446  510444 retry.go:31] will retry after 1.449209ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.741695  510444 retry.go:31] will retry after 2.193956ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.744932  510444 retry.go:31] will retry after 4.196877ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.750167  510444 retry.go:31] will retry after 8.563762ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.759374  510444 retry.go:31] will retry after 9.395796ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.769623  510444 retry.go:31] will retry after 11.467549ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.781890  510444 retry.go:31] will retry after 12.720239ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
I0414 11:48:50.795303  510444 retry.go:31] will retry after 19.447718ms: open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/scheduled-stop-849683/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849683 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849683 -n scheduled-stop-849683
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-849683
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849683 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-849683
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-849683: exit status 7 (77.94396ms)

                                                
                                                
-- stdout --
	scheduled-stop-849683
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849683 -n scheduled-stop-849683
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849683 -n scheduled-stop-849683: exit status 7 (68.027483ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-849683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-849683
--- PASS: TestScheduledStopUnix (114.02s)
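For context, the scheduled-stop flow the test drives is: arm a delayed stop, optionally cancel it, then re-arm with a short delay and watch status flip to Stopped (exit 7) once the schedule fires. An illustrative sequence with the same profile and flags from the log:

    $ out/minikube-linux-amd64 stop -p scheduled-stop-849683 --schedule 5m        # arm a delayed stop
    $ out/minikube-linux-amd64 stop -p scheduled-stop-849683 --cancel-scheduled   # disarm it before it fires
    $ out/minikube-linux-amd64 stop -p scheduled-stop-849683 --schedule 15s       # re-arm with a short delay
    $ out/minikube-linux-amd64 status --format='{{.Host}}' -p scheduled-stop-849683   # reports Stopped after it fires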

                                                
                                    
TestRunningBinaryUpgrade (176.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3194706881 start -p running-upgrade-930410 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3194706881 start -p running-upgrade-930410 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m30.38336691s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-930410 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-930410 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.020513894s)
helpers_test.go:175: Cleaning up "running-upgrade-930410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-930410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-930410: (1.184061184s)
--- PASS: TestRunningBinaryUpgrade (176.24s)
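The upgrade test above follows the in-place pattern: create the cluster with the archived v1.26.0 binary, then re-run start on the same profile with the current binary. Condensed, illustrative form; the /tmp path is the cached release this run downloaded, and the extra logging flags from the log are omitted:

    $ /tmp/minikube-v1.26.0.3194706881 start -p running-upgrade-930410 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 start -p running-upgrade-930410 --memory=2200 --driver=kvm2 --container-runtime=crio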

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223451 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-223451 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (90.691675ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-223451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
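The MK_USAGE exit above is the expected guard: --no-kubernetes and --kubernetes-version are mutually exclusive. Illustrative ways to satisfy it, using the same profile and driver flags as the log:

    $ out/minikube-linux-amd64 start -p NoKubernetes-223451 --no-kubernetes --driver=kvm2 --container-runtime=crio   # drop the version flag
    $ out/minikube-linux-amd64 config unset kubernetes-version   # or clear a globally configured version first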

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223451 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223451 --driver=kvm2  --container-runtime=crio: (1m34.262734283s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-223451 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.52s)

                                                
                                    
TestNetworkPlugins/group/false (3.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-948178 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-948178 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.129875ms)

                                                
                                                
-- stdout --
	* [false-948178] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:50:05.216635  544212 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:50:05.216733  544212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:50:05.216739  544212 out.go:358] Setting ErrFile to fd 2...
	I0414 11:50:05.216744  544212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:50:05.216932  544212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-503273/.minikube/bin
	I0414 11:50:05.217554  544212 out.go:352] Setting JSON to false
	I0414 11:50:05.219284  544212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":19956,"bootTime":1744611449,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:50:05.219595  544212 start.go:139] virtualization: kvm guest
	I0414 11:50:05.221560  544212 out.go:177] * [false-948178] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:50:05.222863  544212 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:50:05.222887  544212 notify.go:220] Checking for updates...
	I0414 11:50:05.225426  544212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:50:05.226762  544212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-503273/kubeconfig
	I0414 11:50:05.228153  544212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-503273/.minikube
	I0414 11:50:05.229449  544212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:50:05.230726  544212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:50:05.232415  544212 config.go:182] Loaded profile config "NoKubernetes-223451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:50:05.232520  544212 config.go:182] Loaded profile config "force-systemd-env-233929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:50:05.232599  544212 config.go:182] Loaded profile config "offline-crio-209305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:50:05.232676  544212 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:50:05.271423  544212 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 11:50:05.272812  544212 start.go:297] selected driver: kvm2
	I0414 11:50:05.272836  544212 start.go:901] validating driver "kvm2" against <nil>
	I0414 11:50:05.272853  544212 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:50:05.275337  544212 out.go:201] 
	W0414 11:50:05.276627  544212 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0414 11:50:05.277775  544212 out.go:201] 

                                                
                                                
** /stderr **
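The MK_USAGE failure above ("The crio container runtime requires CNI") is the expected outcome of --cni=false with crio; selecting an explicit CNI instead would let the start proceed. Illustrative variant only, not what this test runs, and bridge is assumed here as one of the documented --cni values:

    $ out/minikube-linux-amd64 start -p false-948178 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio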
net_test.go:88: 
----------------------- debugLogs start: false-948178 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-948178" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-948178

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-948178"

                                                
                                                
----------------------- debugLogs end: false-948178 [took: 2.940413308s] --------------------------------
helpers_test.go:175: Cleaning up "false-948178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-948178
--- PASS: TestNetworkPlugins/group/false (3.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (150.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.83670968 start -p stopped-upgrade-515371 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.83670968 start -p stopped-upgrade-515371 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m46.022172296s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.83670968 -p stopped-upgrade-515371 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.83670968 -p stopped-upgrade-515371 stop: (2.163852933s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-515371 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-515371 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.958213342s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (150.14s)
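For anyone reproducing the stopped-binary upgrade by hand: the flow above starts a cluster with the previous release binary, stops it with that same binary, then restarts it with the binary under test. A minimal sketch, assuming an old binary at an illustrative path and a throwaway profile name:

    # start a cluster with the previous minikube release (path and profile are examples)
    /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # stop it with the same old binary
    /tmp/minikube-v1.26.0 -p upgrade-demo stop
    # restart the stopped cluster with the binary under test
    out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio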

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (62.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223451 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0414 11:52:20.784221  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223451 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m0.711716323s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-223451 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-223451 status -o json: exit status 2 (247.969065ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-223451","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-223451
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-223451: (1.535150341s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (62.50s)
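The status check above is expected to exit non-zero: with --no-kubernetes the VM runs but the kubelet and apiserver stay stopped, and "minikube status" signals that with a non-zero exit code (2 in the run above). A rough manual equivalent, with an example profile name:

    # start a profile without Kubernetes components
    out/minikube-linux-amd64 start -p nokube-demo --no-kubernetes --driver=kvm2 --container-runtime=crio
    # exits non-zero here because Kubelet and APIServer are reported as Stopped
    out/minikube-linux-amd64 -p nokube-demo status -o json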

                                                
                                    
TestNoKubernetes/serial/Start (40.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223451 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223451 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.687940404s)
--- PASS: TestNoKubernetes/serial/Start (40.69s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-515371
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
TestPause/serial/Start (54.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-066593 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-066593 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (54.681277898s)
--- PASS: TestPause/serial/Start (54.68s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-223451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-223451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.982138ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
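The verification above leans on systemctl's exit status: "systemctl is-active --quiet" exits non-zero when the queried unit is not active, so the non-zero exit from "minikube ssh" is the passing outcome for a --no-kubernetes profile. Done by hand it looks roughly like this (profile name is an example):

    # a non-zero exit here confirms the kubelet unit is not running
    out/minikube-linux-amd64 ssh -p nokube-demo "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet inactive, as expected"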

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.396654603s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.20s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-223451
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-223451: (1.746181136s)
--- PASS: TestNoKubernetes/serial/Stop (1.75s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (38.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223451 --driver=kvm2  --container-runtime=crio
E0414 11:53:40.356900  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223451 --driver=kvm2  --container-runtime=crio: (38.946719395s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (38.95s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-223451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-223451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.882451ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m22.492652625s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.49s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0414 11:57:03.854330  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m11.278422584s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.28s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-948178 "pgrep -a kubelet"
I0414 11:57:06.381490  510444 config.go:182] Loaded profile config "auto-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-948178 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gtgbk" [e0d15494-5fd7-48e5-b2b0-43d3060c7d73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gtgbk" [e0d15494-5fd7-48e5-b2b0-43d3060c7d73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00368405s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-948178 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
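The DNS, Localhost and HairPin checks above all execute inside the netcat test deployment created by NetCatPod; the same probes can be replayed against any of the plugin profiles. Using the auto-948178 context from this run:

    # in-cluster DNS resolution through the cluster DNS service
    kubectl --context auto-948178 exec deployment/netcat -- nslookup kubernetes.default
    # localhost connectivity from inside the pod
    kubectl --context auto-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin traffic: the pod reaching itself through its own service
    kubectl --context auto-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"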

                                                
                                    
TestNetworkPlugins/group/calico/Start (78.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m18.484682065s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dqtgg" [a76b5f96-8193-4010-ae69-73a826805935] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004862024s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-948178 "pgrep -a kubelet"
I0414 11:58:20.653806  510444 config.go:182] Loaded profile config "kindnet-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-948178 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6k7hj" [8d3b3597-792d-4950-993b-c29b40495c61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6k7hj" [8d3b3597-792d-4950-993b-c29b40495c61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005332846s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-948178 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.415237012s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.42s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-r4wdf" [c28022b1-094a-4c89-9015-25bd27f726cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004392361s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-948178 "pgrep -a kubelet"
I0414 11:58:57.752194  510444 config.go:182] Loaded profile config "calico-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-948178 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-92l8h" [38a98ee3-5cfd-4f89-8d0b-abc54a106b46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-92l8h" [38a98ee3-5cfd-4f89-8d0b-abc54a106b46] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003852852s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (66.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m6.165398998s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.17s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-948178 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (88.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.287842442s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-948178 "pgrep -a kubelet"
I0414 11:59:58.393013  510444 config.go:182] Loaded profile config "custom-flannel-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-948178 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ht8cq" [5611313b-ee70-468d-8b70-557926780129] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ht8cq" [5611313b-ee70-468d-8b70-557926780129] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005715441s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-948178 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-948178 "pgrep -a kubelet"
I0414 12:00:09.823468  510444 config.go:182] Loaded profile config "enable-default-cni-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-948178 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9lgzz" [70bbd717-fb84-4ef4-ae56-3f8f9f45220f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9lgzz" [70bbd717-fb84-4ef4-ae56-3f8f9f45220f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.007726859s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (21.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-948178 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-948178 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.174170679s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0414 12:00:38.249798  510444 retry.go:31] will retry after 1.179220643s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-948178 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-948178 exec deployment/netcat -- nslookup kubernetes.default: (5.145877743s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (21.50s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-948178 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m25.072411191s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.07s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-flds4" [7e2f963a-3948-44d7-aa95-048068fd9751] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005308937s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-948178 "pgrep -a kubelet"
I0414 12:01:02.128969  510444 config.go:182] Loaded profile config "flannel-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-948178 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t2tqw" [6ffaa4f7-db1f-4463-96d0-c618069f3d7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t2tqw" [6ffaa4f7-db1f-4463-96d0-c618069f3d7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004805669s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (92.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-500740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-500740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m32.156155971s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (92.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-948178 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (106.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-751466 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-751466 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m46.526126319s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (106.53s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-948178 "pgrep -a kubelet"
I0414 12:01:51.742560  510444 config.go:182] Loaded profile config "bridge-948178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-948178 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ff54g" [f6db7bb8-9f81-4319-abb1-1d88702a57c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ff54g" [f6db7bb8-9f81-4319-abb1-1d88702a57c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003419902s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-948178 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-948178 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0414 12:10:37.758706  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:10:55.784249  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-477612 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 12:02:27.111115  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-477612 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m23.774786177s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-500740 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [feece02a-b6ca-4527-946f-96fdd0d859d2] Pending
helpers_test.go:344: "busybox" [feece02a-b6ca-4527-946f-96fdd0d859d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [feece02a-b6ca-4527-946f-96fdd0d859d2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004218994s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-500740 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)
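The DeployApp step amounts to applying the busybox fixture, waiting for the pod to become Ready, and reading an ulimit from inside it. A manual sketch using the context from this run (the "kubectl wait" line is an assumed stand-in for the harness's own 8m readiness poll):

    kubectl --context no-preload-500740 create -f testdata/busybox.yaml
    # stand-in for the test's readiness wait
    kubectl --context no-preload-500740 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context no-preload-500740 exec busybox -- /bin/sh -c "ulimit -n"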

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-500740 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-500740 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-500740 --alsologtostderr -v=3
E0414 12:02:47.592778  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:14.425989  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:14.432372  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:14.443739  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:14.465185  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:14.506636  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:14.588434  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:14.749898  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:15.071916  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:15.714234  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:16.996523  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-500740 --alsologtostderr -v=3: (1m30.818180383s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-751466 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [243e9979-2432-4070-9f38-d15d5e4e84de] Pending
E0414 12:03:19.558869  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [243e9979-2432-4070-9f38-d15d5e4e84de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0414 12:03:24.681173  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [243e9979-2432-4070-9f38-d15d5e4e84de] Running
E0414 12:03:28.555113  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004015534s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-751466 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-751466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-751466 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-751466 --alsologtostderr -v=3
E0414 12:03:34.922825  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:40.357161  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-751466 --alsologtostderr -v=3: (1m31.012603154s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-477612 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da07d3a3-2c9c-41d7-8410-eda96e7736b5] Pending
helpers_test.go:344: "busybox" [da07d3a3-2c9c-41d7-8410-eda96e7736b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0414 12:03:51.499138  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:51.505624  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:51.517003  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:51.538472  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:51.579956  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:51.661446  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:51.823078  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:52.145265  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [da07d3a3-2c9c-41d7-8410-eda96e7736b5] Running
E0414 12:03:52.787380  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:54.068847  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:55.405052  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:03:56.630234  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.003724142s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-477612 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-477612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-477612 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-477612 --alsologtostderr -v=3
E0414 12:04:01.751627  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:11.993897  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-477612 --alsologtostderr -v=3: (1m31.008976699s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500740 -n no-preload-500740
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500740 -n no-preload-500740: exit status 7 (69.555254ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-500740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
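Note: in the step above, exit status 7 from minikube status corresponds to the Stopped host state shown in stdout, which is why the test treats it as "may be ok" rather than a failure. Enabling the dashboard addon while the profile is stopped only records the setting; it takes effect on the next start. A sketch of the same sequence, assuming the no-preload-500740 profile:

	# status exits 7 while the host is stopped; surface the code without aborting
	out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-500740 || echo "status exit code: $?"
	# enable the dashboard addon on the stopped profile for the upcoming restart
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-500740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4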

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (346.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-500740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 12:04:32.476193  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:36.366884  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:50.477077  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:58.631818  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:58.638259  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:58.649587  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:58.671024  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:58.712522  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:58.794003  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:58.955402  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:59.277596  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:04:59.919478  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:01.201424  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:03.437724  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/addons-345184/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-500740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m46.329961447s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500740 -n no-preload-500740
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (346.60s)
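Note: this second start reuses the stopped profile but passes --preload=false, so minikube skips the preloaded image tarball and pulls the Kubernetes images individually. A sketch, assuming the same profile and Kubernetes version:

	# restart the stopped profile without the preloaded image tarball
	out/minikube-linux-amd64 start -p no-preload-500740 --memory=2200 --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.2
	# the host should report Running again once the start completes
	out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-500740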

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-751466 -n embed-certs-751466
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-751466 -n embed-certs-751466: exit status 7 (75.912573ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-751466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0414 12:05:03.763220  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (301.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-751466 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 12:05:08.885125  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.055888  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.062325  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.073714  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.095227  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.136708  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.218445  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.380164  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:10.701805  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:11.344087  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:12.625875  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:13.438497  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:15.187215  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:19.126480  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:20.309085  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-751466 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m1.082602103s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-751466 -n embed-certs-751466
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.36s)
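Note: --embed-certs in the start above inlines the client certificate data into kubeconfig instead of referencing the .crt/.key files under the profile directory. A sketch of how that can be spot-checked after the restart, assuming the embed-certs-751466 profile and kubectl's default kubeconfig:

	# restart with certificates embedded directly in kubeconfig
	out/minikube-linux-amd64 start -p embed-certs-751466 --memory=2200 --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.2
	# with --embed-certs the user entry carries client-certificate-data rather than a file path
	kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-751466")].user.client-certificate-data}' | head -c 40; echo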

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612
E0414 12:05:30.551308  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612: exit status 7 (79.089141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-477612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (333.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-477612 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 12:05:39.608232  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:51.032691  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:55.783725  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:55.790266  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:55.801778  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:55.823349  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:55.864876  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:55.946466  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:56.108161  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:56.430204  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:57.072114  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:58.288510  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/kindnet-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:05:58.353872  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:00.915738  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:06.037286  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:16.278614  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:20.569700  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/custom-flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:31.994876  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:35.360850  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/calico-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:36.759943  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:51.971747  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:51.978141  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:51.989529  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:52.010952  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:52.052399  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:52.133940  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:52.295542  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:52.617010  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:53.258885  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:54.540755  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:06:57.102790  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:02.224148  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:06.615849  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/auto-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:12.465746  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/bridge-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:17.721869  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:07:20.783973  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/functional-575216/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-477612 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m33.163105976s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (333.44s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-071646 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-071646 --alsologtostderr -v=3: (1.316037246s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071646 -n old-k8s-version-071646: exit status 7 (73.522846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-071646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t8ln8" [e6d6b10e-26d2-46e3-aa5b-6ff61b1359d1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t8ln8" [e6d6b10e-26d2-46e3-aa5b-6ff61b1359d1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004741858s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)
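Note: this check confirms that the dashboard pod created by the addon enabled earlier is Running after the restart. An equivalent manual check, assuming the no-preload-500740 context and the kubernetes-dashboard namespace the addon installs into:

	# wait for the dashboard pod to become Ready after the restart
	kubectl --context no-preload-500740 wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m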

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6twtp" [5e8f4dd9-dbf9-4e5c-adeb-7d00452e738e] Running
E0414 12:10:10.055497  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/enable-default-cni-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003724846s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6twtp" [5e8f4dd9-dbf9-4e5c-adeb-7d00452e738e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004085096s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-751466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-751466 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
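Note: the VerifyKubernetesImages step lists every image cached in the profile and reports the ones outside the stock minikube/Kubernetes set; here that is the kindnet CNI image and the busybox image deployed by DeployApp. The underlying command can be run directly (a sketch, assuming the embed-certs-751466 profile):

	# JSON output is what the test parses for non-minikube images
	out/minikube-linux-amd64 -p embed-certs-751466 image list --format=json
	# the default output is easier to scan by eye
	out/minikube-linux-amd64 -p embed-certs-751466 image list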

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t8ln8" [e6d6b10e-26d2-46e3-aa5b-6ff61b1359d1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005169734s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-500740 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-751466 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-751466 -n embed-certs-751466
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-751466 -n embed-certs-751466: exit status 2 (259.694931ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-751466 -n embed-certs-751466
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-751466 -n embed-certs-751466: exit status 2 (260.768555ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-751466 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-751466 -n embed-certs-751466
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-751466 -n embed-certs-751466
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)
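Note: in the Pause sequence above, exit status 2 from minikube status is the expected signal while components are paused, not a failure: after pause the APIServer field reports Paused and the Kubelet field reports Stopped, and the same status calls succeed again after unpause. A condensed sketch, assuming the embed-certs-751466 profile:

	# pause the control plane and kubelet
	out/minikube-linux-amd64 pause -p embed-certs-751466
	# both queries exit 2 while paused; print the state without aborting
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-751466 || true
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p embed-certs-751466 || true
	# resume and re-check; the same queries should now exit 0
	out/minikube-linux-amd64 unpause -p embed-certs-751466
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-751466
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p embed-certs-751466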

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-104469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-104469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (47.501951309s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.50s)
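Note: the newest-cni start above selects --network-plugin=cni and hands kubeadm a custom pod CIDR via --extra-config; the suite treats this configuration as needing additional CNI setup, which is why the later DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop steps are skipped with the "cni mode requires additional setup" warning. A sketch of the flag combination, assuming the profile name from this run:

	# bring up the cluster with the CNI plugin selected and a custom pod CIDR passed to kubeadm
	out/minikube-linux-amd64 start -p newest-cni-104469 --memory=2200 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.2
	# confirm the node registered; user workloads are not exercised in this mode
	kubectl --context newest-cni-104469 get nodes -o wide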

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-500740 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-500740 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-500740 -n no-preload-500740
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-500740 -n no-preload-500740: exit status 2 (251.518319ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-500740 -n no-preload-500740
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-500740 -n no-preload-500740: exit status 2 (254.619496ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-500740 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-500740 -n no-preload-500740
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-500740 -n no-preload-500740
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b9vzg" [9d68bb5c-f6a9-42c8-ad00-be313540f002] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b9vzg" [9d68bb5c-f6a9-42c8-ad00-be313540f002] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003934233s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-104469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-104469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.570448004s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.57s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-104469 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-104469 --alsologtostderr -v=3: (11.359461951s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b9vzg" [9d68bb5c-f6a9-42c8-ad00-be313540f002] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004717704s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-477612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104469 -n newest-cni-104469
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104469 -n newest-cni-104469: exit status 7 (68.166557ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-104469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-104469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-104469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (35.510877528s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104469 -n newest-cni-104469
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-477612 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-477612 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612
E0414 12:11:23.485775  510444 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-503273/.minikube/profiles/flannel-948178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612: exit status 2 (257.664417ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612: exit status 2 (264.605369ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-477612 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-477612 -n default-k8s-diff-port-477612
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)
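
The two "exit status 2" results above are expected while the cluster is paused: status exits non-zero when a component is not in the Running state, which is why the test treats them as "may be ok". A minimal sketch of the same pause/unpause cycle run by hand, using the commands from the log (verbosity flags omitted):

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-477612
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-477612   # prints "Paused"
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-477612     # prints "Stopped"
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-477612
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-477612   # should report Running again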

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
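
The warning reflects how this profile was started: --network-plugin=cni with no CNI actually deployed, so workload pods cannot schedule until additional network setup is done, and the user-app check is skipped. If a plugin is wanted, minikube can deploy one itself at start time; a hedged example (the --cni flag is a standard minikube start option, and "bridge" is just one of its built-in choices, not what this test run used):

	out/minikube-linux-amd64 start -p newest-cni-104469 --driver=kvm2 --container-runtime=crio --cni=bridge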

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-104469 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-104469 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104469 -n newest-cni-104469
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104469 -n newest-cni-104469: exit status 2 (251.064308ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104469 -n newest-cni-104469
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104469 -n newest-cni-104469: exit status 2 (255.579191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-104469 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104469 -n newest-cni-104469
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104469 -n newest-cni-104469
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.02s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.14
266 TestNetworkPlugins/group/cilium 3.38
277 TestStartStop/group/disable-driver-mounts 0.18
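
Each of the skipped tests below records its skip reason in its own section. If one of them needs to be exercised on its own, the usual Go test name filter applies from the minikube source tree (the package path is an assumption about the standard minikube layout, and the harness normally needs extra flags such as the driver to test against, which are omitted here):

	go test ./test/integration -run 'TestNetworkPlugins/group/kubenet' -v -timeout 30m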
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-345184 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-948178 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-948178" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-948178

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-948178"

                                                
                                                
----------------------- debugLogs end: kubenet-948178 [took: 2.987553825s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-948178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-948178
--- SKIP: TestNetworkPlugins/group/kubenet (3.14s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-948178 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-948178

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-948178" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-948178" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: kubelet daemon config:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> k8s: kubelet logs:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-948178

>>> host: docker daemon status:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: docker daemon config:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: docker system info:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: cri-docker daemon status:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: cri-docker daemon config:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: cri-dockerd version:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: containerd daemon status:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: containerd daemon config:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: containerd config dump:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: crio daemon status:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: crio daemon config:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: /etc/crio:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

>>> host: crio config:
* Profile "cilium-948178" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-948178"

----------------------- debugLogs end: cilium-948178 [took: 3.235605083s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-948178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-948178
--- SKIP: TestNetworkPlugins/group/cilium (3.38s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-128843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-128843
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
